Polygl0t/Tucano2-qwen-1.5B-Think
Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Feb 5, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Polygl0t/Tucano2-qwen-1.5B-Think is a 1.51 billion parameter instruction-tuned Portuguese language model built on the Qwen3 Transformer architecture with a 4,096 token context length. Developed by Polygl0t, it is fine-tuned specifically for reasoning tasks and generates Chain-of-Thought (CoT) traces wrapped between dedicated special opening and closing tokens. The model is intended primarily for research and development in Portuguese language modeling, particularly for applications that require explicit reasoning steps.
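Because the reasoning trace is emitted between special tokens, it is usually clearest to decode the output with special tokens preserved so the CoT segment stays visible. Below is a minimal usage sketch with the Hugging Face transformers library, assuming the model ships a standard chat template; the Portuguese prompt and the generation settings are illustrative, not prescribed by the model card.

```python
# Minimal sketch: load the model and generate a response that includes its
# reasoning trace. Prompt wording and max_new_tokens are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Polygl0t/Tucano2-qwen-1.5B-Think"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quantization listed above
    device_map="auto",
)

# Portuguese prompt asking for step-by-step reasoning (illustrative).
messages = [
    {"role": "user", "content": "Explique passo a passo: quanto é 17 vezes 23?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)

# Keep special tokens so the reasoning delimiters remain visible in the output.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=False))
```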
