amadeusai/Amadeus-Verbo-FI-Qwen2.5-14B-PT-BR-Instruct
Amadeus-Verbo-FI-Qwen2.5-14B-PT-BR-Instruct is a 14.7-billion-parameter, Transformer-based causal language model developed by amadeusai. Fine-tuned from Qwen2.5-14B-Instruct for two epochs on 600,000 instructions, the model is optimized for generating text in Brazilian Portuguese. It supports a context length of 131,072 tokens and is designed for applications requiring high-quality, instruction-following responses in Portuguese.
Amadeus-Verbo-FI-Qwen2.5-14B-PT-BR-Instruct Overview
Amadeus-Verbo-FI-Qwen2.5-14B-PT-BR-Instruct is a specialized large language model (LLM) developed by amadeusai, built on the Qwen2.5-14B-Instruct architecture. It was fine-tuned for two epochs on a dataset of 600,000 instructions, making it highly proficient at understanding and generating content in Brazilian Portuguese.
Key Capabilities
- Brazilian Portuguese Specialization: Optimized for high-quality text generation and instruction following in PT-BR.
- Robust Architecture: Based on a Transformer model incorporating RoPE, SwiGLU, RMSNorm, and Attention QKV bias.
- Significant Context Window: Supports a context length of up to 131,072 tokens, enabling processing of extensive inputs and generating detailed responses.
- Instruction Following: Fine-tuned with a large instruction dataset to accurately respond to user prompts.
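Since the model follows the Qwen2.5 instruct lineage, it can presumably be used with the standard Hugging Face `transformers` chat workflow. The sketch below is an assumption based on that lineage, not an official usage snippet from amadeusai; the model ID is taken from this card, and the `build_messages` helper and the system prompt are illustrative. Loading the full 14.7B-parameter model requires substantial GPU or CPU memory.

```python
# Minimal usage sketch for the model via the `transformers` chat-template API.
# Assumes Qwen2.5-style chat formatting; names below other than MODEL_ID are illustrative.
MODEL_ID = "amadeusai/Amadeus-Verbo-FI-Qwen2.5-14B-PT-BR-Instruct"

def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat in the role/content format expected by apply_chat_template."""
    return [
        {"role": "system", "content": "Você é um assistente útil que responde em português do Brasil."},
        {"role": "user", "content": user_prompt},
    ]

def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    # Imported here so the helper above can be used without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Render the chat into the model's prompt format, ending with the assistant turn.
    text = tokenizer.apply_chat_template(
        build_messages(user_prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens; decode only the newly generated response.
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Example call (downloads the model weights on first use):
# print(generate("Resuma em um parágrafo o que é aprendizado de máquina."))
```

The prompt-stripping step matters because `generate` returns the input tokens followed by the continuation; slicing at `inputs.input_ids.shape[1]` keeps only the model's answer.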
Good For
- Applications requiring advanced natural language understanding and generation in Brazilian Portuguese.
- Developing chatbots, virtual assistants, or content creation tools for the Brazilian market.
- Tasks that benefit from a large context window, such as summarizing long documents or engaging in extended conversations in Portuguese.
- Research and development in Portuguese NLP; see the developers' technical report for details.