nvidia/OpenCodeReasoning-Nemotron-32B
Text generation · Concurrency cost: 2 · Model size: 32.8B · Quant: FP8 · Context length: 32k · Published: Apr 15, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

OpenCodeReasoning-Nemotron-32B is a 32.8-billion-parameter large language model developed by NVIDIA, derived from Qwen2.5-32B-Instruct. It is post-trained specifically for reasoning in code generation tasks and supports a context length of up to 32,768 tokens. The model performs strongly on competitive-programming benchmarks such as LiveCodeBench and CodeContests, making it suitable for advanced code-related reasoning applications.
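Since the weights are openly licensed, the model can be served behind any OpenAI-compatible endpoint (e.g. vLLM). The sketch below builds a chat-completions request payload for such a server; the parameter values and the idea of serving it this way are assumptions for illustration, not part of this listing:

```python
import json

# Hugging Face model ID from this listing
MODEL_ID = "nvidia/OpenCodeReasoning-Nemotron-32B"

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build a chat-completions payload for an OpenAI-compatible server.

    Sampling parameters here are illustrative assumptions, not
    recommendations from the model card.
    """
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.6,
    }

payload = build_request(
    "Write a Python function that checks whether a string is a palindrome."
)
print(json.dumps(payload, indent=2))
```

The resulting JSON body would be POSTed to the server's `/v1/chat/completions` route; any OpenAI-compatible client library can send it directly.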
