Amadeus-Verbo-FI-Qwen2.5-0.5B-PT-BR-Instruct is a 0.49-billion-parameter Transformer-based causal language model developed by amadeusai and fine-tuned from Qwen2.5-0.5B-Instruct. Optimized specifically for Brazilian Portuguese, it was fine-tuned for 2 epochs on a dataset of 600k instructions. The model is designed for general instruction-following tasks in Brazilian Portuguese and supports a 32,768-token context length.
Amadeus-Verbo-FI-Qwen2.5-0.5B-PT-BR-Instruct Overview
This model, developed by amadeusai, is a Brazilian Portuguese (PT-BR) instruction-tuned language model based on the Qwen2.5-0.5B-Instruct architecture. It has 0.49 billion parameters and was fine-tuned for 2 epochs on a 600k-sample instruction dataset, specializing it for conversational and instruction-following tasks in Portuguese.
Key Capabilities & Technical Specifications
- Architecture: Transformer-based with RoPE, SwiGLU, RMSNorm, and Attention QKV bias.
- Parameters: 0.49 billion total parameters (0.36 billion non-embedding).
- Context Length: Supports a substantial context window of 32,768 tokens.
- Language Focus: Fine-tuned and optimized specifically for Brazilian Portuguese.
- Training: Fine-tuned from Qwen2.5-0.5B-Instruct for 2 epochs on a 600k-sample instruction dataset.
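The snippet below is a minimal sketch of how a Qwen2.5-based instruct model like this one is typically loaded and queried with the Hugging Face transformers library. The repository id (assumed to be amadeusai/Amadeus-Verbo-FI-Qwen2.5-0.5B-PT-BR-Instruct), the system prompt, and the generation settings are illustrative assumptions, not values taken from the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repository id; adjust to the actual published path.
model_id = "amadeusai/Amadeus-Verbo-FI-Qwen2.5-0.5B-PT-BR-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # place the 0.5B model on GPU if available
)

# Chat-formatted prompt in Brazilian Portuguese (example prompt is illustrative).
messages = [
    {"role": "system", "content": "Você é um assistente útil que responde em português do Brasil."},
    {"role": "user", "content": "Explique em poucas frases o que é aprendizado de máquina."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```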
Intended Use Cases
This model is particularly well-suited for applications requiring robust language understanding and generation in Brazilian Portuguese. Its instruction-tuned nature makes it effective for:
- Chatbots and conversational AI in Portuguese.
- Content generation and summarization in Portuguese.
- Instruction following for various tasks specified in Portuguese.
- Educational tools and language learning applications for Portuguese speakers.
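As a brief illustration of the chatbot and summarization use cases above, the sketch below uses the transformers text-generation pipeline with chat-formatted messages (supported in recent transformers releases). The repository id and the example prompt are again assumptions for illustration.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="amadeusai/Amadeus-Verbo-FI-Qwen2.5-0.5B-PT-BR-Instruct",  # assumed repo id
    torch_dtype="auto",
    device_map="auto",
)

# Instruction-following request in Brazilian Portuguese (illustrative prompt).
messages = [
    {"role": "user", "content": "Resuma em uma frase: o modelo foi ajustado com 600 mil instruções em português do Brasil."},
]
result = generator(messages, max_new_tokens=128)
# With chat-formatted input, the pipeline returns the full conversation;
# the last message holds the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```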
For more technical details, refer to the associated research article: Amadeus-Verbo Technical Report.