abacusai/Smaug-2-72B is a 72.3-billion-parameter language model fine-tuned from Qwen1.5-72B-Chat and optimized for reasoning and coding tasks. It reports improved performance over its base model on benchmarks such as MT-Bench and HumanEval, and is intended for applications that require strong logical inference and code generation.
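A minimal usage sketch with the Hugging Face transformers library, assuming the checkpoint is available on the Hub under the id abacusai/Smaug-2-72B and that enough GPU memory is available (a 72B model generally needs multiple GPUs or quantization); the prompt text is illustrative only.

```python
# Sketch: loading the model and generating a chat-style completion.
# Assumes the "abacusai/Smaug-2-72B" repo id from the description above
# and a tokenizer that ships a chat template (as Qwen1.5-based chat models do).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/Smaug-2-72B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory footprint
    device_map="auto",           # shard weights across available GPUs
)

# Example coding prompt, formatted with the tokenizer's chat template.
messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```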