AceInstruct-1.5B is a 1.5 billion parameter instruction-tuned causal language model developed by NVIDIA, built on the Qwen2.5-Base architecture. It is fine-tuned on general SFT datasets covering coding, mathematics, and general-purpose tasks, making it versatile across domains. The AceInstruct family performs comparably to Qwen2.5-Instruct, and AceInstruct-1.5B in particular outperforms Qwen2.5-1.5B-Instruct on benchmarks such as HumanEval, MBPP, GSM8K, and MATH.
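
A minimal usage sketch with the Hugging Face transformers library is shown below. It assumes the checkpoint is published under the repo id nvidia/AceInstruct-1.5B and that the model ships a chat template; generation settings are illustrative, not tuned recommendations.

```python
# Minimal sketch: load AceInstruct-1.5B with Hugging Face transformers and run one chat turn.
# The repo id "nvidia/AceInstruct-1.5B" is an assumption; adjust if the checkpoint lives elsewhere.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/AceInstruct-1.5B"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so the 1.5B model fits comfortably on one GPU
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."}
]
# Build the prompt from the model's chat template and tokenize it.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding; swap in sampling parameters as needed.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```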