Polygl0t/Tucano2-qwen-1.5B-Base
Text Generation | Model Size: 2B | Quant: BF16 | Ctx Length: 32k | Published: Jan 13, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights

Polygl0t/Tucano2-qwen-1.5B-Base is a 1.5-billion-parameter decoder-only transformer, continually pretrained from Qwen3-1.7B-Base. Developed by Polygl0t, it is optimized specifically for Portuguese and achieves state-of-the-art performance on several Portuguese benchmarks. The model is intended as a foundation for research and development in Portuguese language modeling, particularly for comparative experiments on the effects of continual pretraining.
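
Since the model is presented as a base for Portuguese language-modeling experiments, a minimal loading-and-generation sketch may be useful. It assumes the checkpoint is published on the Hugging Face Hub under the ID above and that the installed `transformers` version supports the Qwen3 architecture; the prompt string is purely illustrative.

```python
# Minimal sketch: load the base model and run greedy text completion.
# Assumptions: the weights are on the Hugging Face Hub under this ID,
# and your transformers release supports the Qwen3 architecture.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Polygl0t/Tucano2-qwen-1.5B-Base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the published weights are BF16
)

# This is a base (non-instruct) model: give it plain text to continue.
prompt = "A capital do Brasil é"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading in `torch.bfloat16` matches the published quantization; for comparative continual-pretraining experiments, the same snippet with a different `model_id` (e.g., the Qwen3-1.7B-Base starting point) gives a like-for-like baseline.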
