yongchao98/R1-Code-Interpreter-14B
Text generation · Model size: 14B · Quant: FP8 · Context length: 32k · Published: May 7, 2025 · License: MIT · Architecture: Transformer · Concurrency cost: 1
yongchao98/R1-Code-Interpreter-14B is a 14-billion-parameter Qwen-2.5 model, fine-tuned for step-by-step code reasoning using multi-turn supervised fine-tuning and reinforcement learning. It autonomously decides when and how to invoke code for reasoning and planning tasks, and exhibits emergent self-checking behavior through code generation. With a 32,768-token context length, it is suited to complex problem-solving that requires code interpretation.
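As a minimal sketch of how the model might be used, the snippet below loads it with the Hugging Face transformers library. Only the model ID comes from this card; the prompt and generation settings are illustrative assumptions, and running it requires downloading the full weights and suitable GPU memory.

```python
# Hedged sketch: querying R1-Code-Interpreter-14B via transformers.
# The model ID is from this card; everything else is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yongchao98/R1-Code-Interpreter-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# A planning-style prompt; the model decides whether to reason in text
# or emit code as part of its answer.
messages = [{"role": "user",
             "content": "Plan Python steps to sort intervals by end time."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:],
                       skip_special_tokens=True))
```

Since this is an FP8-quantized 14B checkpoint, serving it behind an OpenAI-compatible endpoint (e.g. vLLM) would be a common alternative to in-process loading.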