Polygl0t/Tucano2-qwen-3.7B-Instruct
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Feb 12, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
Polygl0t/Tucano2-qwen-3.7B-Instruct is a 3.76-billion-parameter instruction-tuned Portuguese language model built on the Qwen3 Transformer architecture. Developed by Polygl0t, it was trained with supervised fine-tuning followed by Anchored Preference Optimization on Portuguese datasets. The model targets tasks such as retrieval-augmented generation, function calling, summarization, and structured output generation, making it suitable for research and development in Portuguese language modeling.
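A minimal usage sketch with the Hugging Face `transformers` library, assuming the weights are published under the same model ID. The system prompt, question, and generation parameters are illustrative and not taken from the card:

```python
MODEL_ID = "Polygl0t/Tucano2-qwen-3.7B-Instruct"  # ID from the card; availability assumed


def build_messages(question: str) -> list[dict]:
    """Build a chat-format request in Portuguese for the instruct model."""
    return [
        {"role": "system", "content": "Você é um assistente útil."},  # illustrative system prompt
        {"role": "user", "content": question},
    ]


def generate(question: str, max_new_tokens: int = 256) -> str:
    """Run one chat turn. Imports are deferred so the helpers above stay lightweight."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # Load in BF16, matching the quantization listed on the card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer.apply_chat_template(
        build_messages(question), add_generation_prompt=True, return_tensors="pt"
    )
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Resuma em uma frase: o modelo Tucano2 foi treinado em português."))
```

Within the 32k context window, the same pattern extends to RAG (prepend retrieved passages to the user message) or structured output (describe the desired JSON schema in the system prompt).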