Ring-1T is a 1-trillion-parameter thinking model developed by inclusionAI, built on the Ling 2.0 architecture with 50 billion activated parameters and a 128K-token context window. Trained with large-scale reinforcement learning with verifiable rewards (RLVR) and stabilized by the Icepop method, it is optimized for deep natural-language reasoning and excels at complex tasks such as math competitions, code generation, and logical reasoning. It is well suited to applications that demand advanced problem-solving and inference.
Ring-1T: A Trillion-Parameter Thinking Model
Ring-1T, developed by inclusionAI, is a 1-trillion-parameter model built on the Ling 2.0 architecture, featuring 50 billion activated parameters and an extended context window of up to 128K tokens. The model was scaled up with large-scale reinforcement learning with verifiable rewards (RLVR) and further refined with RLHF training, yielding balanced performance across diverse tasks.
Key Capabilities
- Deep Reasoning: Achieves leading open-source performance on challenging benchmarks including math competitions (AIME 25, HMMT 25), code generation (LiveCodeBench, Codeforces), and logical reasoning (ARC-AGI-1).
- Advanced Problem Solving: Demonstrated silver-medal-level performance on IMO 2025 math problems and strong results on ICPC World Finals 2025 programming challenges.
- Stable Reinforcement Learning: Leverages the team's Icepop reinforcement-learning stabilization method and the ASystem framework for efficient, stable training of MoE architectures at trillion-parameter scale.
- Comprehensive Tasks: Exhibits strong competitiveness in general tasks (Arena-Hard-v2.0), healthcare (HealthBench), and creative writing (Creative Writing v3).
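Icepop targets the training–inference probability discrepancy that destabilizes RL on large MoE models: tokens whose probability under the training engine drifts too far from their probability under the inference engine are masked out of the gradient. A minimal sketch of that masking idea (the ratio bounds, the exact masking rule, and the surrogate loss below are illustrative assumptions, not the published algorithm):

```python
import math

def icepop_mask(train_logps, infer_logps, low=0.5, high=2.0):
    """Token-level mask: keep a token only if the ratio of training-engine
    probability to inference-engine probability lies within [low, high];
    tokens with a larger train/infer discrepancy are masked out."""
    mask = []
    for lt, li in zip(train_logps, infer_logps):
        ratio = math.exp(lt - li)  # p_train(token) / p_infer(token)
        mask.append(1.0 if low <= ratio <= high else 0.0)
    return mask

def masked_pg_loss(train_logps, infer_logps, advantages, low=0.5, high=2.0):
    """Policy-gradient surrogate averaged over only the kept tokens, so
    masked tokens contribute no gradient."""
    mask = icepop_mask(train_logps, infer_logps, low, high)
    kept = sum(mask)
    if kept == 0:
        return 0.0
    total = sum(m * (-a * lt)
                for m, a, lt in zip(mask, advantages, train_logps))
    return total / kept
```

The design intuition is that, at MoE scale, small routing differences between the training and inference stacks compound into large per-token probability gaps; dropping those tokens keeps gradient estimates from being dominated by off-policy noise.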
Good for
- Applications requiring advanced natural language reasoning and problem-solving.
- Complex mathematical and coding challenges.
- Research and development in large-scale reinforcement learning for MoE models.
- Use cases demanding high performance in logical inference and creative text generation.