aiseosae/Affine-G4-5EHNj2HZoRYKXtewrXPbvCTixTPdPGQJ6SkaZvrx3GeqEhsc

TEXT GENERATION · Concurrency Cost: 4 · Model Size: 72.7B · Quant: FP8 · Ctx Length: 32k · Published: Jan 31, 2026 · License: MIT · Architecture: Transformer · Open Weights · Cold

Kimi-Dev-72B is a 72.7-billion-parameter open-source coding LLM developed by the Kimi-Dev Team, optimized for software engineering tasks and issue resolution. It sets a new state of the art among open-source models with 60.4% on SWE-bench Verified. The model is fine-tuned with large-scale reinforcement learning: it autonomously patches real repositories in Docker and receives a reward only when the entire test suite passes, encouraging robust, correct solutions.


Kimi-Dev-72B: A Strong Open-source Coding LLM

Kimi-Dev-72B is a 72.7 billion parameter open-source coding Large Language Model developed by the Kimi-Dev Team, designed for software engineering tasks and issue resolution. It has demonstrated state-of-the-art performance among open-source models on the challenging SWE-bench Verified benchmark.

Key Capabilities & Features

  • Superior Performance on SWE-bench Verified: Achieves 60.4% performance, surpassing other open-source models.
  • Reinforcement Learning Optimization: Optimized through large-scale reinforcement learning, where it autonomously patches real repositories within a Docker environment.
  • Robust Solution Generation: Rewards are granted only when the entire test suite passes, ensuring the generation of correct and robust solutions aligned with real-world development standards.
  • Open-source Availability: Available for download and deployment on Hugging Face and GitHub, encouraging community exploration and contribution.
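The all-or-nothing reward described above can be sketched as follows. This is a minimal illustration, not the Kimi-Dev Team's actual harness: the runner function, Docker image name, and test commands are assumptions made for the example; only the reward rule (1.0 iff every test passes) comes from the description above.

```python
import subprocess
from typing import List


def run_suite_in_docker(image: str, patch_path: str) -> List[bool]:
    """Hypothetical runner: applies a candidate patch inside a Docker
    container and runs the repository's test suite. The image name and
    shell commands are illustrative assumptions."""
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{patch_path}:/patch.diff",
            image,
            "sh", "-c", "git apply /patch.diff && pytest -q",
        ],
        capture_output=True,
        text=True,
    )
    # A real harness would parse per-test results; here we reduce the
    # whole run to a single pass/fail flag via the exit code.
    return [result.returncode == 0]


def patch_reward(test_results: List[bool]) -> float:
    """Binary outcome reward: granted only when the ENTIRE suite passes.

    Any failing test (or an empty result set) yields zero, so the policy
    is never rewarded for partial fixes."""
    return 1.0 if test_results and all(test_results) else 0.0
```

A full-suite pass (`patch_reward([True, True])`) returns 1.0; a single failure (`patch_reward([True, False])`) returns 0.0. This sparse, binary signal is what pushes the model toward complete, correct patches rather than superficial edits.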

Good For

  • Automated Software Issue Resolution: Ideal for tasks involving identifying and fixing bugs or implementing features in codebases.
  • Code Generation and Refactoring: Its optimization for robust solutions makes it suitable for generating and improving code.
  • Research and Development in Code LLMs: Provides a strong baseline for researchers working on advanced coding AI.