Polygl0t/Tucano2-qwen-1.5B-Instruct
Text generation · Model size: 2B · Quantization: BF16 · Context length: 32k · Published: Feb 5, 2026 · License: apache-2.0 · Architecture: Transformer · Concurrency cost: 1 · Open weights

Polygl0t/Tucano2-qwen-1.5B-Instruct is a 1.5-billion-parameter instruction-tuned model from Polygl0t, built on the Qwen3 architecture and optimized for Portuguese. It targets tasks such as retrieval-augmented generation, function calling, summarization, and structured output generation. With a 4,096-token context length, this compact model is intended for research and development in Portuguese language modeling.
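A minimal usage sketch, assuming the model is published on the Hugging Face Hub under the repo id above and follows the standard `transformers` chat-template API (neither is verified here); the helper name `generate` and its parameters are illustrative, not part of the model's documentation.

```python
MODEL_ID = "Polygl0t/Tucano2-qwen-1.5B-Instruct"


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run one instruction through the model and return the reply text.

    Hypothetical helper: assumes the repo id above exists on the Hub
    and that the tokenizer ships a chat template.
    """
    # Imports are kept inside the function so the sketch can be read
    # without transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    # Instruction-tuned models expect chat-formatted input.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For a Portuguese prompt such as "Resuma o texto a seguir em uma frase: ...", the helper would return the model's summary as plain text.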
