OpenCodeReasoning-Nemotron-32B is a 32.8-billion-parameter large language model developed by NVIDIA, derived from Qwen2.5-32B-Instruct. It is post-trained specifically for reasoning in code generation tasks and supports a context length of up to 32,768 tokens. The model performs strongly on competitive programming benchmarks such as LiveCodeBench and CodeContests, making it well suited to advanced code-reasoning applications.
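A minimal inference sketch using the Hugging Face `transformers` library is shown below. The repository name `nvidia/OpenCodeReasoning-Nemotron-32B`, the use of the tokenizer's chat template, and the generation settings are assumptions based on common conventions for instruction-tuned model cards, not verbatim from this page; a 32B model also requires substantial GPU memory, so the heavy imports are deferred into the generation function.

```python
MODEL_ID = "nvidia/OpenCodeReasoning-Nemotron-32B"  # assumed Hugging Face repo name


def build_messages(problem: str) -> list[dict]:
    """Wrap a competitive-programming problem as a single-turn chat message."""
    return [{"role": "user", "content": problem}]


def generate_solution(problem: str, max_new_tokens: int = 2048) -> str:
    """Generate a candidate solution; hypothetical usage, not from the card."""
    # Heavy imports deferred so build_messages() stays usable without
    # torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Apply the model's chat template and generate within the 32,768-token window.
    input_ids = tokenizer.apply_chat_template(
        build_messages(problem), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    msgs = build_messages("Return the n-th Fibonacci number in O(n) time.")
    print(msgs)
```

The prompt is kept as a plain single-turn user message because the chat template inherited from Qwen2.5-32B-Instruct handles the system and role formatting itself.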