AceInstruct-72B is a 72.7-billion-parameter instruction-tuned causal language model from NVIDIA, fine-tuned from Qwen2.5-Base. It is designed for versatile use across coding, mathematics, and general-purpose tasks, with performance comparable to Qwen2.5-72B-Instruct. The model supports a 131,072-token context length and is part of the AceInstruct family of models improved using Qwen, performing well across a broad range of domains.
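As a rough sketch of typical usage, the snippet below loads the model with Hugging Face Transformers and runs a short chat-style generation. The repository id `nvidia/AceInstruct-72B`, the dtype and device settings, and the chat-template call are assumptions based on standard practice for instruction-tuned models, not details stated above.

```python
# Minimal usage sketch (assumed repository id and settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nvidia/AceInstruct-72B"  # assumed Hugging Face repository id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 72B model's memory footprint manageable
    device_map="auto",           # shard weights across available GPUs
)

# Build a chat-style prompt with the tokenizer's chat template.
messages = [
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response and decode only the newly generated tokens.
outputs = model.generate(inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```

For long-context work, the same call pattern applies; prompts up to the 131,072-token context length can be passed in, subject to available GPU memory.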