Polygl0t/Tucano2-qwen-0.5B-Instruct
Text generation · Concurrency cost: 1 · Model size: 0.8B · Quant: BF16 · Context length: 32k · Published: Feb 5, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

Polygl0t/Tucano2-qwen-0.5B-Instruct is an instruction-tuned Portuguese language model with 0.8 billion parameters and a 32,768 token context length, built on the Qwen3 Transformer architecture. Developed by Polygl0t, it was trained using supervised fine-tuning and Anchored Preference Optimization. This compact model excels in Portuguese benchmarks for tasks like retrieval-augmented generation, function calling, summarization, and structured output generation, making it suitable for research and development in Portuguese language modeling.
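A minimal usage sketch with the Hugging Face `transformers` library is shown below. The model ID comes from this card; the chat-style prompt helper and the Portuguese example prompt are illustrative assumptions, not an official usage example from the model authors.

```python
def build_messages(user_prompt: str) -> list[dict]:
    # Chat-format message list expected by instruction-tuned models.
    return [{"role": "user", "content": user_prompt}]

def run_demo() -> str:
    # Assumed usage via the transformers text-generation pipeline.
    # Downloads the ~0.8B BF16 weights on first call and requires the
    # `transformers` package (plus PyTorch) to be installed.
    from transformers import pipeline
    generator = pipeline(
        "text-generation",
        model="Polygl0t/Tucano2-qwen-0.5B-Instruct",
    )
    messages = build_messages("Resuma o texto a seguir em uma frase.")
    out = generator(messages, max_new_tokens=128)
    return out[0]["generated_text"]
```

Recent versions of the `text-generation` pipeline accept a chat-format message list directly and apply the model's chat template; older versions require a plain prompt string instead.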
