LIMO-v2: Less Is More for Reasoning
LIMO-v2 is the updated version of the LIMO model, developed by GAIR and corresponding to the latest paper revision as of July 30, 2025. The 32.8-billion-parameter model is fine-tuned from the Qwen2.5-32B-Instruct backbone, with a primary focus on strengthening reasoning capabilities.
Key Capabilities & Features
- Reasoning Optimization: Specifically designed and fine-tuned for improved performance on reasoning tasks.
- Large Context Window: Supports a context length of 131,072 tokens, enabling processing of long documents and extended reasoning traces in a single call.
- Framework Compatibility: Compatible with popular LLM frameworks including Hugging Face Transformers, vLLM, and TensorRT-LLM, ensuring broad usability.
- Updated Version: Represents the latest iteration of the LIMO model, incorporating recent advancements.
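As one illustration of the serving path, vLLM exposes an OpenAI-compatible chat-completions API; the sketch below only constructs a request payload for such a server. The model id "GAIR/LIMO-v2" and the server URL are assumptions here, not confirmed by this card; substitute the values for your deployment.

```python
import json

# Assumed values: adjust to your deployment.
MODEL_ID = "GAIR/LIMO-v2"  # assumed model id; check the model card for the exact name
SERVER_URL = "http://localhost:8000/v1/chat/completions"  # default vLLM OpenAI-compatible endpoint

def build_chat_request(question: str, max_tokens: int = 2048) -> dict:
    """Build an OpenAI-style chat-completions payload for a vLLM server."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": question}],
        "max_tokens": max_tokens,
        "temperature": 0.0,  # deterministic decoding, common for reasoning evaluation
    }

payload = build_chat_request("If 3x + 5 = 20, what is x?")
body = json.dumps(payload)  # send with any HTTP client, e.g. requests.post(SERVER_URL, data=body)
print(body)
```

Because the endpoint follows the OpenAI schema, existing OpenAI-client code can usually be pointed at the vLLM server with only a base-URL change.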
When to Use This Model
LIMO-v2 is particularly well-suited for applications requiring robust reasoning abilities. Its large context window makes it ideal for tasks that involve processing and understanding long documents or complex problem descriptions. Developers can easily integrate it into existing workflows using standard LLM libraries.
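Even a 131,072-token window bounds what fits in one call, so long inputs still need budgeting. A minimal sketch of that bookkeeping follows; the function names and the generation reserve are illustrative, not part of the model's API.

```python
CONTEXT_LEN = 131072  # LIMO-v2 context window

def prompt_budget(max_new_tokens: int, context_len: int = CONTEXT_LEN) -> int:
    """Tokens available for the prompt after reserving room for generation."""
    return context_len - max_new_tokens

def chunk_tokens(token_ids: list, max_new_tokens: int = 4096) -> list:
    """Split an over-long token sequence into chunks that each fit the
    context window alongside the generation budget."""
    budget = prompt_budget(max_new_tokens)
    return [token_ids[i:i + budget] for i in range(0, len(token_ids), budget)]

# A 300,000-token document with a 4,096-token generation reserve needs 3 chunks.
chunks = chunk_tokens(list(range(300_000)))
print(len(chunks), prompt_budget(4096))  # → 3 126976
```

In practice the token ids would come from the model's tokenizer (e.g. via Hugging Face Transformers), and chunk boundaries are better placed at paragraph or section breaks than at arbitrary token offsets.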
For more technical details and training code, refer to the GitHub repository and the associated arXiv paper.