microsoft/NextCoder-14B

Status: Warm
Visibility: Public
Parameters: 14.8B
Tensor type: FP8
Context length: 32,768 tokens
Released: May 3, 2025
License: MIT
Hosted on: Hugging Face
Overview

NextCoder-14B: Enhanced Code Editing LLM

NextCoder-14B is a 14.7-billion-parameter causal language model developed by Microsoft, part of the NextCoder family of models. It is built on the Qwen2.5-Coder Instruct variants and uses a novel fine-tuning method, Selective Knowledge Transfer (SeleKT), to significantly enhance its code editing capabilities.
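At a high level, SeleKT periodically projects the fine-tuned weights back onto the base model, retaining only the largest-magnitude weight deltas so that adaptation stays sparse. A minimal NumPy sketch of that sparse-projection step follows; the function name, the `density` parameter, and the exact selection rule are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def selekt_project(base, finetuned, density=0.05):
    """Sketch of a SeleKT-style sparse projection step (illustrative only):
    keep the top `density` fraction of weight deltas by magnitude and reset
    every other weight to the base model's value."""
    delta = finetuned - base
    k = max(1, int(density * delta.size))
    # Magnitude threshold separating the k largest-magnitude deltas.
    threshold = np.partition(np.abs(delta).ravel(), -k)[-k]
    mask = np.abs(delta) >= threshold
    return base + delta * mask
```

Applied after (or periodically during) fine-tuning, this keeps the edited model close to the base weights, which is the intuition behind SeleKT's robustness claims.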

Key Capabilities & Features

  • Superior Code Editing: Achieves substantial improvements in code editing performance, with NextCoder-14B showing strong results on benchmarks like HUMANEVALFIX (89.8%), CANITEDIT (60.2%), AIDER (72.2%), and POLYGLOT (12.2%).
  • Robust Adaptation: The SeleKT finetuning method ensures robust adaptation to diverse code edits without compromising the model's generalizability.
  • Long Context Support: Supports a context window of up to 32,768 tokens, facilitating work with larger codebases.
  • Transformer Architecture: Utilizes a transformer architecture with RoPE, SwiGLU, RMSNorm, and Attention QKV bias.
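To make the architecture bullet concrete, here is a minimal NumPy sketch of two of the listed components, RMSNorm and a SwiGLU feed-forward block. Shapes and names are illustrative; the real model implements these inside the Qwen2.5 transformer stack:

```python
import numpy as np

def rmsnorm(x, weight, eps=1e-6):
    """RMSNorm: scale activations by the inverse root-mean-square of the
    last axis, then apply a learned per-feature gain."""
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return x / rms * weight

def silu(x):
    """SiLU (swish) activation used inside the SwiGLU gate."""
    return x / (1.0 + np.exp(-x))

def swiglu_mlp(x, w_gate, w_up, w_down):
    """SwiGLU feed-forward block: a SiLU-gated projection multiplied
    elementwise with an ungated projection, then projected back down."""
    return (silu(x @ w_gate) * (x @ w_up)) @ w_down
```

In the actual model these operations run per transformer layer with learned weight matrices; the sketch only shows the data flow.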

When to Use NextCoder-14B

NextCoder-14B is ideal for developers and applications that require advanced code editing. Its strengths lie in fixing bugs, refactoring code, and adapting existing code snippets. Robust performance across code editing benchmarks makes it a strong candidate for integration into development workflows, automated code review systems, or intelligent coding assistants where precise, context-aware code modifications are crucial. For more details, refer to the research paper.
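As a starting point for such an integration, the sketch below loads the model with Hugging Face transformers and asks it to fix a buggy function. The chat-template calls follow the standard transformers API, but the prompt wording is an assumption, and `run_edit_demo()` downloads the full 14B checkpoint, so it needs a correspondingly large GPU:

```python
def build_edit_prompt(code: str, instruction: str) -> list:
    """Assemble a chat-style message asking the model to edit `code`.
    (Illustrative prompt format, not an official template.)"""
    return [{"role": "user",
             "content": f"{instruction}\n```python\n{code}\n```"}]

def run_edit_demo():
    # Heavy: downloads and loads the 14B checkpoint; requires a large GPU.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    name = "microsoft/NextCoder-14B"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(
        name, torch_dtype="auto", device_map="auto")
    messages = build_edit_prompt(
        "def add(a, b):\n    return a - b",
        "Fix the bug in this function.")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens.
    print(tokenizer.decode(out[0][inputs.shape[-1]:],
                           skip_special_tokens=True))
```

The prompt builder is separated out so the surrounding application can reuse it for bug fixes, refactors, or snippet adaptation without touching the model-loading code.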