Polygl0t/Tucano2-qwen-3.7B-Think
Text Generation
- Concurrency Cost: 1
- Model Size: 4B
- Quant: BF16
- Ctx Length: 32k
- Published: Feb 7, 2026
- License: apache-2.0
- Architecture: Transformer
- Open Weights
- Warm

Polygl0t/Tucano2-qwen-3.7B-Think is a 3.76-billion-parameter instruction-tuned Portuguese language model built on the Qwen3 Transformer architecture. Developed by Polygl0t, it is fine-tuned specifically for reasoning tasks and generates Chain-of-Thought (CoT) traces encapsulated within <think> and </think> tokens. The model performs strongly on Portuguese knowledge and reasoning benchmarks, making it well suited for research and development in Portuguese language modeling.
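Since the model emits its reasoning inside <think>...</think> tokens, downstream code typically needs to separate the trace from the final answer. The sketch below is a minimal, hedged example of that post-processing step; it assumes Qwen3-style think tags as described above, and the helper name `split_think_trace` and the sample string are illustrative, not part of the model's API.

```python
import re

def split_think_trace(output: str) -> tuple[str, str]:
    """Split model output into (reasoning trace, final answer).

    Assumes the model wraps its Chain-of-Thought in Qwen3-style
    <think>...</think> tags, as stated in the model card.
    """
    match = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    if not match:
        # No think block found: treat the whole output as the answer.
        return "", output.strip()
    trace = match.group(1).strip()
    answer = output[match.end():].strip()
    return trace, answer

# Illustrative output string (not an actual generation from the model):
sample = "<think>O usuário pergunta 2+2. O resultado é 4.</think>A resposta é 4."
trace, answer = split_think_trace(sample)
print(answer)  # A resposta é 4.
```

The same helper degrades gracefully when the model skips the think block (e.g. for trivial prompts), returning an empty trace rather than raising.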
