amadeusai/Amadeus-Verbo-MI-Qwen-2.5-0.5B-PT-BR-Instruct-Experimental
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

The Amadeus-Verbo-MI-Qwen-2.5-0.5B-PT-BR-Instruct-Experimental model by amadeusai is a 0.5-billion-parameter instruction-tuned language model, produced by merging Qwen/Qwen2.5-0.5B-Instruct and amadeusai/qwen2.5-0.5B-PT-BR-Instruct with the SLERP method. Built on the Qwen2.5 architecture, it is optimized for instruction-following in Brazilian Portuguese (PT-BR) and is intended for applications that need efficient, accurate responses in that language, with a context length of 32,768 tokens.
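For reference, SLERP (spherical linear interpolation) merges two checkpoints by interpolating each pair of corresponding weight tensors along the arc between them rather than along a straight line, which preserves the norm of the blended weights better than plain averaging. With interpolation factor $t \in [0,1]$ and $\Omega$ the angle between flattened tensors $w_1$ and $w_2$, the standard formula is

$$\operatorname{slerp}(w_1, w_2; t) = \frac{\sin\big((1-t)\,\Omega\big)}{\sin\Omega}\,w_1 + \frac{\sin(t\,\Omega)}{\sin\Omega}\,w_2, \qquad \Omega = \arccos\!\left(\frac{w_1 \cdot w_2}{\lVert w_1\rVert\,\lVert w_2\rVert}\right).$$

The per-tensor interpolation factors used for this particular merge are not stated in the listing.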

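A minimal usage sketch with the Hugging Face transformers library follows. The chat-template workflow, example prompt, and generation settings are assumptions based on standard Qwen2.5-style checkpoints, not details confirmed by this listing:

```python
# Minimal sketch: load the merged model and generate a reply in Brazilian
# Portuguese. Assumes the standard transformers chat-template workflow for
# Qwen2.5-based checkpoints; adjust dtype/device for your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amadeusai/Amadeus-Verbo-MI-Qwen-2.5-0.5B-PT-BR-Instruct-Experimental"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Example PT-BR instruction (hypothetical prompt for illustration).
messages = [
    {"role": "user", "content": "Explique o que é aprendizado de máquina em uma frase."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```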