nvidia/OpenCodeReasoning-Nemotron-32B-IOI

Warm
Public
32.8B
FP8
131072
May 7, 2025
License: apache-2.0
Hugging Face
Overview

NVIDIA's OpenCodeReasoning-Nemotron-32B-IOI is a 32-billion-parameter large language model built on the Qwen2.5-32B-Instruct architecture. It is fine-tuned specifically for advanced reasoning in code generation, making it highly effective for competitive programming challenges. The model supports a context length of 32,768 tokens, allowing for complex problem understanding and solution generation.
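
The model is distributed through Hugging Face and can be loaded with the standard transformers causal-LM interface. The sketch below is a minimal example, assuming hardware with enough memory to hold the 32B weights in bfloat16; the repository ID is the Hugging Face name shown above, and the dtype/device settings are assumptions rather than official recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hugging Face repository for this model.
model_id = "nvidia/OpenCodeReasoning-Nemotron-32B-IOI"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit on the target hardware
    device_map="auto",           # shard across available GPUs
)
```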

Key Capabilities

  • Code Reasoning: Post-trained to enhance reasoning abilities for generating correct and efficient code.
  • Competitive Programming: Demonstrates strong performance on benchmarks like LiveCodeBench and CodeContests, particularly for C++ and Python.
  • Large Context Window: Utilizes a 32K-token context length for processing detailed problem descriptions and generating comprehensive solutions (see the prompting sketch after this list).
  • Commercial Use: Licensed under Apache 2.0, suitable for both commercial and non-commercial applications.
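
As referenced above, a typical invocation places the full problem statement in a single chat turn and asks for a solution in the target language. The sketch below is illustrative only: the problem text and sampling parameters are placeholder assumptions, and it reuses the tokenizer and model objects from the loading example.

```python
# Placeholder competitive-programming prompt; replace with a real problem statement.
problem = (
    "Given an array of N integers, output the length of the longest strictly "
    "increasing subsequence. Provide a complete C++ solution."
)

messages = [{"role": "user", "content": problem}]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Sampling settings here are assumptions, not official recommendations.
output_ids = model.generate(
    input_ids,
    max_new_tokens=4096,   # leave room for reasoning plus the final code block
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```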

Performance Highlights

Evaluations show OpenCodeReasoning-Nemotron-32B-IOI achieving a LiveCodeBench (pass@1) score of 61.5 and a CodeContests (pass@1) score of 25.5. It also scored 175.5 on the IOI (Total Score) benchmark, matching or surpassing other models in its class. The model was trained on the OpenCodeReasoning dataset, which includes 736K Python and 356K C++ samples derived from competitive programming questions, with responses generated by DeepSeek-R1.
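
For context, pass@1 is the standard functional-correctness metric for code benchmarks: the probability that a single sampled solution passes all tests. The sketch below shows the widely used unbiased pass@k estimator from Chen et al. (2021); it is provided for reference and is not necessarily the exact evaluation harness used to produce the scores above, and the example numbers are illustrative.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k: the probability that at least one of k
    samples is correct, given c correct completions out of n generated."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 200 samples per problem, 123 of them correct -> pass@1 estimate.
print(pass_at_k(n=200, c=123, k=1))  # 0.615, i.e. a score of 61.5
```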

Good For

  • Developers and researchers focused on code generation with strong reasoning requirements.
  • Applications involving competitive programming problem-solving.
  • Tasks requiring the generation of C++ and Python code from complex prompts.