## Model Overview
Aimin12/Qwen2.5-Coder-3B-Instruct-Distill-Qwen3-Coder-Next-abliterated is a 3.1 billion parameter instruction-tuned model built upon the Qwen2.5 architecture. This model has been specifically optimized for code-related tasks through a fine-tuning process using the crownelius/Qwen3-Coder-Next-1800x-formatted dataset. It supports a substantial context length of 32,768 tokens, making it suitable for handling larger codebases and complex programming prompts.
## Key Characteristics
- Base Model: Qwen2.5-Coder-3B-Instruct
- Fine-tuning Dataset: crownelius/Qwen3-Coder-Next-1800x-formatted
- Parameter Count: 3.1 billion
- Context Length: 32,768 tokens
- Training Tools: LLaMA-Factory and Unsloth
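The 32,768-token context window determines how much source code fits into a single prompt. As a rough illustration (the ~4-characters-per-token ratio below is a coarse heuristic assumption, not a property of the Qwen tokenizer), a small helper can estimate whether a file fits:

```python
def fits_context(text: str, max_tokens: int = 32768,
                 chars_per_token: float = 4.0) -> bool:
    """Roughly estimate whether `text` fits in the model's context window.

    The chars-per-token ratio is a heuristic; for an exact count,
    tokenize the text with the model's own tokenizer instead.
    """
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= max_tokens
```

For precise budgeting, count tokens with the model's tokenizer rather than relying on a character-based estimate.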
## Ideal Use Cases
- Code Generation: Generating code snippets, functions, or entire programs based on natural language descriptions.
- Code Completion: Assisting developers by suggesting code completions within an IDE.
- Code Understanding: Explaining existing code, identifying potential issues, and suggesting refactorings.
- Programming Assistance: General programming support, debugging, and problem-solving within a coding context.
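For any of the tasks above, prompts follow the ChatML format used by the Qwen2.5 instruct family. In practice the tokenizer's `apply_chat_template` handles this automatically; the manual version below is only an illustrative sketch:

```python
def build_chatml_prompt(user_message: str,
                        system_message: str = "You are a helpful coding assistant.") -> str:
    """Assemble a ChatML-style prompt as used by Qwen2.5 instruct models.

    Prefer the tokenizer's `apply_chat_template` in real code, since it
    applies the exact template shipped with the model.
    """
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("Write a Python function that reverses a string.")
```

The trailing `<|im_start|>assistant\n` cues the model to generate the assistant's reply.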