Amadeus-Verbo-FI-Qwen2.5-1.5B-PT-BR-Instruct Overview
Amadeus-Verbo-FI-Qwen2.5-1.5B-PT-BR-Instruct is a specialized large language model (LLM) developed by amadeusai, focused on Brazilian Portuguese (PT-BR). It is built on the Qwen2.5-1.5B-Instruct base architecture, a Transformer model incorporating RoPE, SwiGLU, RMSNorm, and attention QKV bias.
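To make the RMSNorm component concrete, here is a minimal, dependency-free sketch of the operation. This is an illustration of the technique, not the model's actual implementation (which lives in the Qwen2 code of the transformers library); the `rms_norm` function name and `eps` value are illustrative choices.

```python
import math

def rms_norm(x, gain, eps=1e-6):
    """Root-mean-square layer normalization: rescale each element by the
    reciprocal RMS of the vector, then by a learned per-dimension gain.
    Unlike LayerNorm, no mean is subtracted and no bias is added."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [g * v / rms for g, v in zip(gain, x)]

# With a unit gain, the vector is simply rescaled to (roughly) unit RMS.
out = rms_norm([3.0, 4.0], gain=[1.0, 1.0])
```

In a Transformer block, this normalization is applied before the attention and feed-forward sub-layers, which helps stabilize training at a lower cost than full LayerNorm.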
Key Capabilities & Features
- Brazilian Portuguese Specialization: Fine-tuned specifically for the nuances of Brazilian Portuguese, making it highly effective for PT-BR language tasks.
- Instruction Following: Fine-tuned on a 600k-example instruction dataset for 2 epochs, improving its ability to understand and execute instructions.
- Compact yet Capable: Features 1.54 billion parameters (1.31B non-embedding) and 28 layers, offering a balance of performance and efficiency.
- Extended Context Window: Supports a 32,768-token context length, allowing it to process long inputs and maintain conversational coherence.
- Modern Architecture: Leverages advanced Transformer components for efficient and effective language processing.
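Since the model follows the Qwen2.5-Instruct lineage, its chat turns are expected to use the ChatML layout. In practice the tokenizer's `apply_chat_template()` handles this for you; the sketch below (a hypothetical helper, not part of any library) just makes the wire format visible.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt in the format used by
    Qwen2.5-Instruct-style models. Normally tokenizer.apply_chat_template()
    produces this string; shown here only to illustrate the layout."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "Você é um assistente prestativo.",        # PT-BR system prompt
    "Resuma o texto a seguir em uma frase.",   # PT-BR user instruction
)
```

The trailing `<|im_start|>assistant\n` cues the model to begin its reply; generation stops when it emits the matching `<|im_end|>` token.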
Ideal Use Cases
- Brazilian Portuguese NLP Applications: Excellent for chatbots, content generation, summarization, and translation tasks specifically targeting the Brazilian Portuguese language.
- Instruction-Based Tasks: Well-suited for applications requiring the model to follow specific commands or generate structured outputs based on instructions.
- Resource-Efficient Deployment: Its 1.5B parameter count makes it a strong candidate for scenarios where computational resources are constrained, while still delivering strong PT-BR performance.
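To ground the resource-efficiency claim, here is a back-of-envelope estimate of the memory needed just to hold the 1.54B weights at common precisions. This counts weights only; KV cache, activations, and framework overhead (all workload-dependent) come on top.

```python
PARAMS = 1.54e9  # total parameter count from the model card

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Approximate memory for the weights alone, in decimal gigabytes
    (excludes KV cache, activations, and framework overhead)."""
    return params * bytes_per_param / 1e9

fp16_gb = weight_memory_gb(PARAMS, 2.0)   # float16 / bfloat16
int8_gb = weight_memory_gb(PARAMS, 1.0)   # 8-bit quantized weights
# fp16_gb ≈ 3.08 GB, int8_gb ≈ 1.54 GB
```

At half precision the weights fit comfortably on a consumer GPU with 6-8 GB of VRAM, which is what makes a 1.5B model attractive for cost-sensitive PT-BR deployments.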
For more technical details, refer to the associated research article: Amadeus-Verbo Technical Report.