amadeusai/Amadeus-Verbo-FI-Qwen2.5-7B-PT-BR-Instruct
Amadeus-Verbo-FI-Qwen2.5-7B-PT-BR-Instruct is a 7.61-billion-parameter Brazilian-Portuguese language model developed by amadeusai. Fine-tuned from the Qwen2.5-7B-Instruct base model over two epochs with a 600k instruction dataset, it features a Transformer architecture with a context length of 131,072 tokens. The model is optimized for generating text in Brazilian Portuguese, making it well suited to applications that require high-quality, localized language generation.
Amadeus-Verbo-FI-Qwen2.5-7B-PT-BR-Instruct Overview
Amadeus-Verbo-FI-Qwen2.5-7B-PT-BR-Instruct is a large language model (LLM) developed by amadeusai and fine-tuned specifically for Brazilian Portuguese (PT-BR). It is built on the Qwen2.5-7B-Instruct base model and was fine-tuned for two epochs on a 600k-instruction dataset.
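The snippet below is a minimal usage sketch, assuming the model is published on the Hugging Face Hub under the ID amadeusai/Amadeus-Verbo-FI-Qwen2.5-7B-PT-BR-Instruct and follows the standard transformers chat-template workflow of its Qwen2.5 base; the Portuguese prompt is purely illustrative.

```python
# Minimal usage sketch (assumed Hub ID and standard Qwen2.5 chat-template workflow).
# Requires the transformers library (and accelerate for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amadeusai/Amadeus-Verbo-FI-Qwen2.5-7B-PT-BR-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "Você é um assistente que responde em português brasileiro."},
    {"role": "user", "content": "Explique em poucas frases o que é aprendizado de máquina."},
]

# Build the prompt with the model's chat template and generate a response.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
response = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(response)
```

Generation settings such as max_new_tokens, temperature, and sampling are application-dependent; the values above are only a starting point.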
Key Capabilities
- Brazilian Portuguese Language Generation: Optimized for high-quality text generation in PT-BR.
- Large Context Window: Supports a context length of up to 131,072 tokens, enabling it to process extensive inputs and generate detailed responses (see the long-context sketch after this list).
- Transformer Architecture: Utilizes a Transformer-based architecture incorporating RoPE, SwiGLU, RMSNorm, and Attention QKV bias.
- Instruction Following: Fine-tuned with a large instruction dataset to enhance its ability to follow complex prompts.
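As a rough sketch of how the 131,072-token window could be exercised in practice, the following reuses the tokenizer and model objects from the previous example to summarize a long PT-BR document; the input file name is a hypothetical placeholder, and the context-length check is only an illustration.

```python
# Long-context summarization sketch (reuses `tokenizer` and `model` from the example above).
# The file name below is a hypothetical placeholder.
MAX_CONTEXT = 131_072  # context length reported in this card

with open("relatorio_longo.txt", encoding="utf-8") as f:
    document = f.read()

messages = [
    {"role": "user", "content": f"Resuma o documento a seguir em português brasileiro:\n\n{document}"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Verify the prompt fits in the context window before generating.
n_tokens = len(tokenizer(prompt)["input_ids"])
if n_tokens > MAX_CONTEXT:
    raise ValueError(f"Prompt has {n_tokens} tokens, above the {MAX_CONTEXT}-token context window.")

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
summary_ids = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(summary_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```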
Good for
- Applications requiring native-level Brazilian Portuguese text generation.
- Developing chatbots, virtual assistants, or content creation tools for the Brazilian market.
- Tasks that benefit from a large context window, such as summarizing long documents or engaging in extended conversations in PT-BR.
- Researchers and developers focusing on Portuguese NLP, particularly the Brazilian variant.

For more technical details, refer to the associated article: Amadeus-Verbo Technical Report.