SuperQAI2050/Coder
SuperQAI2050/Coder is a 32.8-billion-parameter large language model derived from Qwen2.5-32B-Instruct and post-trained by SuperQAI2050 for reasoning-focused code generation. It supports an extended context length of 65,536 tokens and performs strongly on code benchmarks such as LiveCodeBench. The model is aimed at developers and researchers building LLMs for competitive programming and other complex coding problems.
OpenCodeReasoning-Nemotron-1.1-32B Overview
SuperQAI2050/Coder, also released as OpenCodeReasoning-Nemotron-1.1-32B, is a 32.8-billion-parameter language model based on Qwen2.5-32B-Instruct. It is post-trained specifically for code generation and reasoning, making it well suited to complex programming challenges, and its 65,536-token context window lets it handle large codebases and detailed problem descriptions.
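Before sending a long problem description to the model, it can help to check that the input fits the 65,536-token window. The sketch below uses a rough ~4-characters-per-token heuristic rather than the model's actual tokenizer, so treat the numbers as an estimate only; the function names are illustrative, not part of any official API.

```python
MAX_CONTEXT_TOKENS = 65_536  # the model's documented context length

def fits_context(prompt: str, max_tokens: int = MAX_CONTEXT_TOKENS,
                 chars_per_token: float = 4.0) -> bool:
    """Rough check that a prompt fits the context window.

    Uses a ~4-chars-per-token heuristic; the real count depends on
    the model's tokenizer and can differ noticeably.
    """
    return len(prompt) / chars_per_token <= max_tokens

def truncate_to_budget(prompt: str, max_tokens: int = MAX_CONTEXT_TOKENS,
                       chars_per_token: float = 4.0) -> str:
    """Crudely truncate a prompt to the estimated character budget."""
    return prompt[: int(max_tokens * chars_per_token)]
```

For production use, counting tokens with the model's own tokenizer is more accurate than this character heuristic.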
Key Capabilities & Performance
- Specialized Code Reasoning: Optimized for generating and reasoning about code, particularly for competitive programming tasks.
- High Performance on LiveCodeBench: Achieves a Pass@1 score of 69.9 on LiveCodeBench (v5), outperforming other distilled 32B+ models like OpenThinker-32B (54.1) and R1-Distill-Qwen-32B (58.1).
- Extended Context Window: Supports a context length of up to 65,536 tokens, beneficial for intricate coding problems.
- Commercial Use Ready: Licensed for both commercial and non-commercial applications under the NVIDIA Open Model License Agreement and Apache License Version 2.0.
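The Pass@1 number cited above is an instance of the standard pass@k metric: generate n samples per problem, count the c that pass the tests, and estimate the probability that at least one of k drawn samples is correct. A minimal sketch of the commonly used unbiased estimator (for k=1 it reduces to the fraction of correct samples):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n -- total samples generated for a problem
    c -- number of samples that pass all tests
    k -- evaluation budget (k=1 for Pass@1)
    """
    if n - c < k:
        # Every size-k draw must contain at least one correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Pass@1 is simply the fraction of correct samples:
print(pass_at_k(10, 7, 1))  # -> 0.7
```

Benchmark scores are then averaged over all problems in the suite.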
Training and Architecture
This model is a dense decoder-only Transformer developed by SuperQAI2050 on the Qwen2.5-32B-Instruct architecture. It was trained on the OpenCodeReasoning dataset, which comprises 1.165 million samples pairing competitive programming questions with responses generated by DeepSeek-R1-0528. The approach is detailed in the paper "OpenCodeReasoning: Advancing Data Distillation for Competitive Coding" (arXiv:2504.01943).
Ideal Use Cases
- Code Generation: Generating Python code for various programming problems.
- Competitive Programming: Solving complex algorithmic challenges.
- LLM Development: Serving as a foundation for developers and researchers building specialized LLMs for coding applications.
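For code-generation use cases like these, a typical workflow is to wrap the problem in a chat-style prompt and then pull the final fenced code block out of the model's response. A minimal sketch, assuming chat-format messages and triple-backtick Python fences in the output; the system text and function names are illustrative, not a documented prompt for this model:

```python
import re

def build_messages(problem: str) -> list:
    """Wrap a competitive-programming problem in a hypothetical chat layout."""
    return [
        {"role": "system", "content": "You are an expert competitive programmer. "
                                      "Reason step by step, then give final Python code."},
        {"role": "user", "content": problem},
    ]

def extract_python_code(response: str) -> str:
    """Keep the last triple-backtick Python block; fall back to the raw text."""
    blocks = re.findall(r"```python\n(.*?)```", response, re.DOTALL)
    return blocks[-1].strip() if blocks else response.strip()
```

Reasoning-tuned models often emit a chain of thought before the answer, so extracting only the final code block keeps downstream evaluation (e.g. running the solution against test cases) clean.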