amadeusai/Amadeus-Verbo-FI-Qwen2.5-1.5B-PT-BR-Instruct

Text generation · Model size: 1.5B · Quant: BF16 · Context length: 32k · Published: Mar 21, 2025 · License: apache-2.0 · Architecture: Transformer

Amadeus-Verbo-FI-Qwen2.5-1.5B-PT-BR-Instruct by amadeusai is a 1.54 billion parameter, Transformer-based causal language model fine-tuned for Brazilian Portuguese. Built on the Qwen2.5-1.5B-Instruct base model, it was fine-tuned for 2 epochs on a 600k-example instruction dataset. The model is optimized for instruction-following tasks in Brazilian Portuguese and supports a 32,768-token context length.


Amadeus-Verbo-FI-Qwen2.5-1.5B-PT-BR-Instruct Overview

Amadeus-Verbo-FI-Qwen2.5-1.5B-PT-BR-Instruct is a specialized large language model (LLM) developed by amadeusai, focusing on Brazilian Portuguese (PT-BR). It is built upon the robust Qwen2.5-1.5B-Instruct base architecture, a Transformer-based model incorporating features like RoPE, SwiGLU, RMSNorm, and Attention QKV bias.

Key Capabilities & Features

  • Brazilian Portuguese Specialization: Fine-tuned specifically for the nuances of Brazilian Portuguese, making it highly effective for PT-BR language tasks.
  • Instruction Following: Fine-tuned on a 600k-example instruction dataset for 2 epochs, strengthening its ability to understand and execute instructions.
  • Compact yet Capable: Features 1.54 billion parameters (1.31B non-embedding) and 28 layers, offering a balance of performance and efficiency.
  • Extended Context Window: Supports a significant context length of 32,768 tokens, allowing for processing longer inputs and maintaining conversational coherence.
  • Modern Architecture: Leverages advanced Transformer components for efficient and effective language processing.
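Because the model inherits the Qwen2.5 chat format, prompts follow the ChatML convention. In practice you would call `tokenizer.apply_chat_template` from Hugging Face `transformers`; the minimal sketch below builds the same structure by hand to show what the model actually sees (the example messages and the `build_chat_prompt` helper are illustrative, not part of the model's API):

```python
# Minimal sketch of the ChatML prompt layout used by the Qwen2.5 family.
# Normally tokenizer.apply_chat_template(...) produces this string for you;
# build_chat_prompt is a hypothetical helper written out for illustration.
def build_chat_prompt(messages):
    """Render a list of {role, content} dicts into ChatML text."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    ]
    # Trailing assistant header tells the model to start generating its reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chat_prompt([
    {"role": "system", "content": "Você é um assistente em português do Brasil."},
    {"role": "user", "content": "Explique o que é aprendizado de máquina."},
])
print(prompt)
```

The trailing `<|im_start|>assistant\n` is the generation prompt: the model continues from there, and decoding stops at the next `<|im_end|>` token.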

Ideal Use Cases

  • Brazilian Portuguese NLP Applications: Excellent for chatbots, content generation, summarization, and translation tasks specifically targeting the Brazilian Portuguese language.
  • Instruction-Based Tasks: Well-suited for applications requiring the model to follow specific commands or generate structured outputs based on instructions.
  • Resource-Efficient Deployment: Its 1.5B parameter count makes it a good fit for deployments where compute and memory are constrained, while retaining solid PT-BR performance.
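The resource-efficiency point can be made concrete with a back-of-envelope estimate: at BF16 (2 bytes per parameter), the 1.54B weights alone take roughly 2.9 GiB, before KV cache and activations. A quick sketch of that arithmetic (the figures follow directly from the metadata above; actual runtime memory will be higher):

```python
# Rough weight-memory estimate for a 1.54B-parameter model stored in BF16.
# This counts weights only; KV cache, activations, and framework overhead
# add to the real footprint.
params = 1.54e9            # total parameters (1.31B non-embedding)
bytes_per_param = 2        # BF16 = 16 bits = 2 bytes
weight_gib = params * bytes_per_param / 1024**3
print(f"Weights: ~{weight_gib:.2f} GiB")
```

By the same arithmetic, an 8-bit quantization would halve the weight footprint to roughly 1.4 GiB, which is why small models like this one are attractive for single-GPU or CPU deployment.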

For more technical details, refer to the associated research article: Amadeus-Verbo Technical Report.