Microcoder 1.5B: A Code-Focused LLM
Microcoder 1.5B is a specialized 1.5-billion-parameter language model developed by pedrodev2026, fine-tuned from the Qwen 2.5 Coder 1.5B Instruct base model using LoRA (Low-Rank Adaptation) on curated code datasets. The result is a model optimized for a range of programming tasks.
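LoRA works by freezing the pretrained weights and learning only a pair of small matrices whose product forms a low-rank update to each adapted layer. A minimal NumPy sketch of the idea follows; the sizes, rank, and alpha/r scaling here are illustrative, not Microcoder's actual training configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 64, 64, 8, 16   # illustrative sizes; rank r << d

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                # trainable factor, zero-initialized

def lora_forward(x):
    # Base path plus the low-rank update B @ A, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer matches the base layer,
# so fine-tuning starts exactly from the pretrained behavior.
assert np.allclose(lora_forward(x), W @ x)
```

Only A and B are updated during fine-tuning, which is why LoRA makes adapting a 1.5B model practical on modest hardware.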
Key Capabilities
- Code Generation: Generates code snippets from natural language prompts.
- Code Completion: Assists developers by completing code during active development.
- Instruction Following: Understands and executes complex coding instructions.
- Lightweight Design: Offers strong performance in a compact, efficient package suitable for resource-constrained environments.
Performance Highlights
Benchmarking results demonstrate its proficiency in coding challenges:
- HumanEval: Achieves a pass@1 score of 59.15%.
- MBPP+: Achieves a pass@1 score of 52.91%.
These scores were obtained using the model in GGUF format with Q5_K_M quantization, indicating robust performance for its size.
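The pass@1 numbers above follow the standard pass@k metric: draw n samples per problem, count how many pass the unit tests, and estimate the probability that at least one of k samples succeeds. A short sketch of the unbiased estimator commonly used for HumanEval-style evaluation (the exact harness used for Microcoder isn't specified here):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n generations of which c
    are correct, passes the tests."""
    if n - c < k:
        return 1.0  # too few failures for any k-subset to be all wrong
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k = 1 this reduces to the plain fraction of correct samples,
# e.g. 59 correct out of 100 generations:
assert abs(pass_at_k(100, 59, 1) - 0.59) < 1e-12
```

For k = 1 the estimator is simply c/n, so a 59.15% HumanEval score means roughly 59 of every 100 problems were solved on the first sample.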
Training Details
The model was fine-tuned with a focus on code-heavy datasets spanning multiple programming languages and problem-solving scenarios. This training approach aimed to enhance its instruction-following abilities and code correctness, particularly at a smaller model scale.
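Since the base model is Qwen 2.5 Coder Instruct, instruction prompts presumably follow Qwen's ChatML-style chat template; in practice the tokenizer's `apply_chat_template` handles this automatically. A minimal sketch of what that format looks like, assuming Microcoder inherits the base model's template:

```python
def format_chatml(messages):
    """Render messages in the ChatML format used by Qwen-family models
    (assumed here to carry over unchanged to Microcoder)."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
])
```

Using the template the model was trained with matters: instruction-tuned models can degrade noticeably when prompted in a different format.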
Good for
- Developers needing a compact yet capable model for code generation.
- Applications requiring efficient code completion and instruction-following.
- Environments where computational resources are a consideration.