Polygl0t/Tucano2-qwen-3.7B-Base
Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Ctx length: 32k · Published: Feb 3, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

Polygl0t/Tucano2-qwen-3.7B-Base is a 3.7-billion-parameter decoder-only transformer continually pretrained from Qwen3-4B-Base by Polygl0t. It is optimized specifically for Portuguese, with a tokenizer adapted to the lexical, morphological, and orthographic properties of the language. The model achieves state-of-the-art performance on Portuguese-language benchmarks and is intended as a foundation for research and development in this low-resource language.
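As a base (non-instruct) model, it is typically loaded for plain text completion. A minimal sketch using the Hugging Face Transformers `AutoModelForCausalLM` / `AutoTokenizer` API, assuming the repo id shown on this card; the prompt and generation settings are illustrative only:

```python
# Minimal text-completion sketch for a base causal LM (illustrative settings).
model_id = "Polygl0t/Tucano2-qwen-3.7B-Base"  # repo id from this card

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # BF16 matches the quantization listed on the card.
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

    # Base models continue text rather than follow instructions,
    # so prompt with the beginning of the passage you want completed.
    prompt = "A língua portuguesa é"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because this is a base model, downstream use for chat or instruction following would require further fine-tuning.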
