amadeusai/AV-BI-Qwen2.5-14B-PT-BR-Instruct

  • Parameters: 14.8B
  • Quantization: FP8
  • Context length: 131,072 tokens
  • License: apache-2.0

Amadeus-Verbo-BI-Qwen2.5-14B-PT-BR-Instruct Overview

This model is a 14.7-billion-parameter Brazilian Portuguese (PT-BR) instruction-tuned large language model developed by amadeusai. It is built on the Qwen2.5-14B base, a Transformer architecture with RoPE, SwiGLU, RMSNorm, and attention QKV bias.
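
Because the model is a Qwen2.5 derivative, it should load with the standard transformers chat workflow. Below is a minimal sketch, assuming the repository ships a Qwen2.5-style chat template; the system and user prompts are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amadeusai/AV-BI-Qwen2.5-14B-PT-BR-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# A PT-BR instruction, formatted with the model's chat template
# (assumption: the template follows the upstream Qwen2.5-Instruct format).
messages = [
    {"role": "system", "content": "Você é um assistente útil."},  # "You are a helpful assistant."
    {"role": "user", "content": "Resuma os principais biomas do Brasil em um parágrafo."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the prompt.
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```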

Key Capabilities & Features

  • Brazilian Portuguese Specialization: Fine-tuned specifically for Brazilian Portuguese, making it well suited to PT-BR natural language processing tasks.
  • Instruction Following: Trained for two epochs on a dataset of 600k instructions, strengthening its ability to understand and carry out complex instructions.
  • Extended Context Window: Supports a context length of 131,072 tokens, allowing it to process and generate long, coherent texts (see the configuration sketch after this list).
  • Robust Architecture: Inherits the architectural features of the Qwen2.5 series, giving it strong causal language modeling performance.
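
The 131,072-token window matches the upstream Qwen2.5-14B-Instruct setup, whose model card enables inputs beyond 32,768 tokens by adding a YaRN rope-scaling entry to config.json. A sketch of that patch follows, assuming this PT-BR derivative keeps the upstream RoPE configuration; the local snapshot path is hypothetical:

```python
import json
from pathlib import Path

# Hypothetical path to a local snapshot of the model; adjust as needed.
config_path = Path("AV-BI-Qwen2.5-14B-PT-BR-Instruct/config.json")
config = json.loads(config_path.read_text())

# YaRN rope scaling as documented for upstream Qwen2.5-14B-Instruct
# (assumption: this fine-tune keeps the same 32k base context).
config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}
config_path.write_text(json.dumps(config, indent=2))
```

The upstream Qwen2.5 card advises applying this static YaRN scaling only when prompts actually exceed 32,768 tokens, since the scaling factor is applied uniformly and can affect quality on shorter inputs.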

Ideal Use Cases

  • Brazilian Portuguese Applications: Suited for any application requiring high-quality language generation or understanding in Brazilian Portuguese.
  • Instruction-Based Tasks: Excels in scenarios where the model needs to follow specific commands or generate content based on detailed instructions.
  • Long-Form Content Generation: The large context window makes it suitable for tasks involving extensive documents, conversations, or detailed responses.

For more technical details, refer to the associated research article: Amadeus-Verbo Technical Report.