aloobun/Reyna-Mini-1.8B-v0.2 is a 1.8-billion-parameter language model fine-tuned from Qwen/Qwen1.5-1.8B-Chat on the Hercules v3 dataset. It targets chat applications, uses the ChatML prompt format, and is the third iteration in its series. On standard benchmarks it averages 45.94, including 44.75 on MMLU and 31.31 on GSM8K, reflecting modest general-knowledge and reasoning ability for a model of this size.
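The ChatML turn structure mentioned above can be sketched as follows. This is a hand-rolled formatter for illustration only; in practice you would load the model's tokenizer with `transformers` and call its built-in `apply_chat_template`, which encodes the same structure.

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string.

    ChatML wraps each turn in <|im_start|>{role} ... <|im_end|> markers,
    which is the format Qwen1.5-derived chat models such as this one expect.
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)


prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

The rendered string can then be tokenized and passed to the model for generation.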