amadeusai/Amadeus-Verbo-MI-Qwen-2.5-0.5B-PT-BR-Instruct-Experimental

Text generation · Concurrency cost: 1 · Model size: 0.5B · Quantization: BF16 · Context length: 32K · License: apache-2.0 · Architecture: Transformer · Open weights · Status: Warm

The Amadeus-Verbo-MI-Qwen-2.5-0.5B-PT-BR-Instruct-Experimental model by amadeusai is a 0.5-billion-parameter instruction-tuned language model, merged with the SLERP method from Qwen/Qwen2.5-0.5B-Instruct and amadeusai/qwen2.5-0.5B-PT-BR-Instruct. Built on the Qwen2.5 architecture, it is optimized for instruction-following in Brazilian Portuguese and targets applications that need efficient, accurate Portuguese responses within a 32,768-token context window.


Overview

This model, Amadeus-Verbo-MI-Qwen-2.5-0.5B-PT-BR-Instruct-Experimental, is a 0.5-billion-parameter instruction-tuned language model developed by amadeusai. It was created with the SLERP merge method from two base models: the general-purpose Qwen/Qwen2.5-0.5B-Instruct and the Portuguese-optimized amadeusai/qwen2.5-0.5B-PT-BR-Instruct. The merge aims to combine the general capabilities of the Qwen2.5 architecture with specialized instruction-following in Brazilian Portuguese.
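
A minimal usage sketch with the Hugging Face transformers library is shown below. The model ID comes from this card; the Portuguese prompt and the generation settings are illustrative assumptions, not recommended defaults.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amadeusai/Amadeus-Verbo-MI-Qwen-2.5-0.5B-PT-BR-Instruct-Experimental"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

# A Brazilian Portuguese instruction, formatted with the model's chat template.
messages = [
    {"role": "user", "content": "Explique em poucas frases o que é aprendizado de máquina."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```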

Key Capabilities

  • Portuguese Instruction Following: Inherits, via its Portuguese-optimized parent model, the ability to understand and respond to instructions in Brazilian Portuguese.
  • Qwen2.5 Architecture: Benefits from the underlying Qwen2.5 model's efficiency and performance.
  • Merged Model: Built with the SLERP (spherical linear interpolation) merge method, which can yield a balanced combination of the strengths of its constituent models; see the sketch after this list.
  • Extended Context Length: Features a 32,768-token context window, allowing the model to process longer inputs and generate more extensive outputs.
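
As a rough illustration of the merge technique named above, here is a minimal per-tensor SLERP sketch in PyTorch. The function, the tensor-by-tensor treatment, and the t = 0.5 factor are all illustrative assumptions for intuition only; the published model was presumably produced with a dedicated merging tool, and its actual interpolation settings are not stated on this card.

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors of the same shape."""
    v0 = w0.flatten().float()
    v1 = w1.flatten().float()
    n0 = v0 / (v0.norm() + eps)
    n1 = v1 / (v1.norm() + eps)
    # Angle between the two (normalized) weight vectors.
    omega = torch.arccos(torch.clamp(n0 @ n1, -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel weights: fall back to plain linear interpolation.
        return (1 - t) * w0 + t * w1
    so = torch.sin(omega)
    merged = (torch.sin((1 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1
    return merged.reshape(w0.shape).to(w0.dtype)

# Hypothetical usage: merge two state dicts key by key with equal weighting.
# merged_sd = {k: slerp(0.5, sd_a[k], sd_b[k]) for k in sd_a}
```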

Use Cases

This model is particularly well-suited for:

  • Portuguese-centric AI applications: Ideal for chatbots, virtual assistants, and content generation systems requiring high proficiency in Brazilian Portuguese.
  • Instruction-based tasks: Suited to scenarios where the model must follow specific commands or prompts to produce a desired output.
  • Research and Development: Provides a foundation for further experimentation and fine-tuning on specific Portuguese datasets or tasks; a minimal fine-tuning sketch follows this list.
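
As a starting point for such experimentation, the following is a hedged LoRA fine-tuning sketch using the peft library. The rank, alpha, and target modules are common choices for Qwen2.5-style attention layers, not values published for this checkpoint, and you would still need to wire the wrapped model into your own training loop or trainer.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model_id = "amadeusai/Amadeus-Verbo-MI-Qwen-2.5-0.5B-PT-BR-Instruct-Experimental"
model = AutoModelForCausalLM.from_pretrained(model_id)

lora_config = LoraConfig(
    r=16,            # adapter rank (assumed; tune for your task)
    lora_alpha=32,   # scaling factor (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # Qwen2.5 attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Only the small adapter matrices are trainable; the 0.5B base stays frozen.
model.print_trainable_parameters()
```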

For more technical details and citation, refer to the Amadeus-Verbo Technical Report.