Overview
This model, Amadeus-Verbo-MI-Qwen-2.5-0.5B-PT-BR-Instruct-Experimental, is a 0.5 billion parameter instruction-tuned language model developed by amadeusai. It was created with the SLERP merge method from two parent models: the general-purpose Qwen/Qwen2.5-0.5B-Instruct and the Portuguese-optimized amadeusai/qwen2.5-0.5B-PT-BR-Instruct. The merge aims to combine the general capabilities of the Qwen2.5 architecture with specialized instruction following in Brazilian Portuguese.
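SLERP (spherical linear interpolation) blends two parent checkpoints by interpolating along the arc between their weight tensors rather than along a straight line, which tends to preserve the scale of the merged weights. The sketch below illustrates the idea on a single tensor only; it is not the exact recipe used for this model, which was produced with dedicated merge tooling and per-layer interpolation settings.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative only)."""
    # Flatten and normalise to compute the angle between the two tensors.
    v0_f, v1_f = v0.flatten().float(), v1.flatten().float()
    v0_n = v0_f / (v0_f.norm() + eps)
    v1_n = v1_f / (v1_f.norm() + eps)
    dot = torch.clamp(torch.dot(v0_n, v1_n), -1.0, 1.0)
    theta = torch.acos(dot)

    # Nearly parallel tensors: fall back to plain linear interpolation.
    if theta.abs() < 1e-4:
        return (1.0 - t) * v0 + t * v1

    sin_theta = torch.sin(theta)
    w0 = torch.sin((1.0 - t) * theta) / sin_theta
    w1 = torch.sin(t * theta) / sin_theta
    return (w0 * v0_f + w1 * v1_f).reshape(v0.shape).to(v0.dtype)

# Example: interpolate one parameter tensor halfway between the two parents,
# assuming base_sd and ptbr_sd are the parents' state_dicts (hypothetical names).
# merged = {name: slerp(0.5, base_sd[name], ptbr_sd[name]) for name in base_sd}
```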
Key Capabilities
- Portuguese Instruction Following: Inherits instruction tuning in Brazilian Portuguese from the amadeusai/qwen2.5-0.5B-PT-BR-Instruct parent, so it understands and responds to prompts written in Brazilian Portuguese (see the usage sketch after this list).
- Qwen2.5 Architecture: Benefits from the underlying Qwen2.5 model's efficiency and performance.
- Merged Model: Utilizes the SLERP method for merging, which can lead to a balanced combination of the strengths of its constituent models.
- Extended Context Length: Supports a context length of 131,072 tokens (128K), allowing it to process long inputs and produce extended outputs.
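The following is a minimal usage sketch with the Hugging Face transformers library, assuming the model is published under the repository id shown below and ships the standard Qwen2.5 chat template; adjust the id and generation settings to your setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id assumed from the model name; change it if the model is hosted elsewhere.
model_id = "amadeusai/Amadeus-Verbo-MI-Qwen-2.5-0.5B-PT-BR-Instruct-Experimental"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# A Brazilian Portuguese instruction, formatted with the model's chat template.
messages = [
    {"role": "system", "content": "Você é um assistente prestativo."},
    {"role": "user", "content": "Explique em poucas frases o que é aprendizado de máquina."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```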
Use Cases
This model is particularly well-suited for:
- Portuguese-centric AI applications: Ideal for chatbots, virtual assistants, and content generation systems requiring high proficiency in Brazilian Portuguese.
- Instruction-based tasks: Excels in scenarios where the model needs to follow specific commands or prompts to produce desired outputs.
- Research and Development: Provides a foundation for further experimentation and fine-tuning on specific Portuguese datasets or tasks.
For more technical details and citation, refer to the Amadeus-Verbo Technical Report.