Polygl0t/Tucano2-qwen-0.5B-Think
Task: Text Generation
Model Size: 0.8B
Quant: BF16
Ctx Length: 32k
Published: Feb 5, 2026
License: apache-2.0
Architecture: Transformer (Open Weights)
Concurrency Cost: 1

Polygl0t/Tucano2-qwen-0.5B-Think is a 0.8 billion parameter instruction-tuned Portuguese language model developed by Polygl0t, built on a Transformer-based Qwen3 architecture. It is specifically fine-tuned for reasoning tasks, generating Chain-of-Thought (CoT) traces enclosed in special `<think>` and `</think>` tokens. This model is intended for research and development in Portuguese language modeling, particularly for applications requiring explicit reasoning steps.
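
Below is a minimal sketch of how the model might be loaded and queried with the Hugging Face `transformers` library, assuming the checkpoint is published on the Hub under the identifier above and ships a chat template; the Portuguese prompt and generation settings are illustrative, not part of the model card.

```python
# Minimal usage sketch (assumption: standard transformers chat-template workflow).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Polygl0t/Tucano2-qwen-0.5B-Think"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Illustrative Portuguese prompt: "Explain why the sky is blue."
messages = [{"role": "user", "content": "Explique por que o céu é azul."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(input_ids, max_new_tokens=512)

# Keep special tokens so the reasoning trace between the think tokens is visible
# before the final answer.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=False))
```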
