Overview
OpenCodeReasoning-Nemotron-32B is a 32.8-billion-parameter large language model from NVIDIA, built on the Qwen2.5-32B-Instruct architecture. It is optimized specifically for reasoning during code generation and performs strongly in competitive programming scenarios. The model supports a 32,768-token context length, giving it room for long problem statements, large code contexts, and extended chains of reasoning.
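As a quick orientation, here is a minimal loading-and-generation sketch using Hugging Face transformers. It assumes the checkpoint is published under the repository ID nvidia/OpenCodeReasoning-Nemotron-32B; check the model card for the exact ID and the recommended prompt format and sampling settings.

```python
# Minimal inference sketch with Hugging Face transformers.
# Assumption: the checkpoint is hosted as "nvidia/OpenCodeReasoning-Nemotron-32B";
# verify the repository ID and prompt format on the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/OpenCodeReasoning-Nemotron-32B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 32.8B parameters: bf16 on recent NVIDIA GPUs
    device_map="auto",           # shard across available GPUs
)

messages = [
    {"role": "user",
     "content": "Write a Python function that returns the longest increasing "
                "subsequence of a list of integers."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models emit long chains of thought, so leave generous room to generate.
outputs = model.generate(inputs, max_new_tokens=4096)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```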
Key Capabilities
- Code Reasoning: Post-trained for enhanced reasoning abilities in code generation.
- Competitive Programming Performance: Scores 61.7 (average) on LiveCodeBench and 24.4 (all) on CodeContests, outperforming several other models in the 32B+ category.
- Extended Context Window: Features a 32,768-token context length for processing long problem descriptions and larger code contexts (see the token-budget sketch after this list).
- Commercial Use: Ready for both commercial and non-commercial applications.
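Because the 32,768-token window must hold both the prompt and the model's generated reasoning, it helps to budget tokens explicitly. A minimal sketch, reusing the tokenizer from the loading example above; the `fits_in_context` helper is hypothetical, written here only for illustration:

```python
# Sketch: check that a prompt plus a generation budget fits in the context window.
# Assumes `tokenizer` from the loading example above; 32_768 is the context
# length stated in this card.
CONTEXT_LEN = 32_768

def fits_in_context(prompt: str, max_new_tokens: int) -> bool:
    """Return True if prompt tokens plus the generation budget fit in the window."""
    n_prompt_tokens = len(tokenizer(prompt)["input_ids"])
    return n_prompt_tokens + max_new_tokens <= CONTEXT_LEN

problem = "Given n (1 <= n <= 2*10^5) and an array a, count pairs ..."  # full statement
if not fits_in_context(problem, max_new_tokens=8192):
    print("Prompt too long: shorten the statement or reduce the generation budget.")
```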
Training and Evaluation
The model was trained on the OpenCodeReasoning dataset, which comprises 736k competitive programming questions paired with DeepSeek-R1-generated responses. Reported benchmark results are averaged over 64 evaluation runs on LiveCodeBench and CodeContests, as detailed in the associated paper.
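In other words, each headline number is an average over repeated evaluation runs rather than a single sample. A purely illustrative sketch of that aggregation, using placeholder scores rather than real benchmark data:

```python
# Illustrative aggregation only: average per-run pass@1 over repeated runs.
# The scores below are placeholders, not real benchmark results.
from statistics import mean

per_run_pass_at_1 = [0.617, 0.622, 0.613]  # one score per run; the card averages 64 runs
print(f"average pass@1 over {len(per_run_pass_at_1)} runs: "
      f"{mean(per_run_pass_at_1):.3f}")
```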
Use Cases
This model is intended for developers and researchers building advanced LLMs, particularly those who need robust code generation and reasoning for competitive programming or complex software development tasks. It is optimized for NVIDIA GPU-accelerated systems and leverages hardware and software stacks such as CUDA for efficient inference.
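For GPU-accelerated batch inference, here is a sketch using vLLM. This serving stack is an assumption on my part: the card does not prescribe one, but vLLM supports the underlying Qwen2.5 architecture; tune the parallelism settings to your hardware.

```python
# Sketch: batch inference with vLLM on NVIDIA GPUs (CUDA).
# Assumptions: vLLM supports this checkpoint (it supports the underlying
# Qwen2.5 architecture) and the repository ID below matches the published model.
from vllm import LLM, SamplingParams

llm = LLM(
    model="nvidia/OpenCodeReasoning-Nemotron-32B",
    tensor_parallel_size=2,   # shard the 32.8B model across 2 GPUs; adjust as needed
    max_model_len=32768,      # the context length stated in this card
)
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192)

# For best results, format prompts with the tokenizer's chat template first.
prompts = ["Given an array a and an integer k, count pairs (i, j) with "
           "a[i] + a[j] == k. Write an efficient Python solution."]
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```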