shenwenAI/shenwen-coderV2-Instruct

Text generation · Model size: 0.5B · Quant: BF16 · Context length: 32k · Published: Mar 26, 2026 · Architecture: Transformer

shenwenAI/shenwen-coderV2-Instruct is a 0.5 billion parameter instruction-tuned code generation model developed by shenwenAI, based on the Qwen2.5-Coder-0.5B-Instruct architecture. The model stores its weights in BF16 and targets efficient inference at small scale. Its primary strength is generating code from natural-language instructions, and shenwenAI recommends running it with their custom swllm.cpp tool for the best performance and output quality.


shenwen-coderV2-Instruct: Optimized Code Generation

shenwen-coderV2-Instruct is a compact, instruction-tuned language model developed by shenwenAI, specifically designed for code generation tasks. Built on the Qwen2.5-Coder-0.5B-Instruct base model, it has 0.5 billion parameters and stores its weights in BF16, keeping its memory footprint small enough for efficient deployment.
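Because the model is built on Qwen2.5-Coder-0.5B-Instruct, it presumably inherits the ChatML-style prompt format used by the Qwen2.5 family. The sketch below assembles a single-turn prompt by hand to show that structure; the system message and the exact special tokens are assumptions carried over from the base model, not documented for this fine-tune.

```python
# Sketch of the ChatML-style prompt format used by Qwen2.5-family
# instruct models, which shenwen-coderV2-Instruct presumably inherits
# from its base. The default system message here is an assumption.

def build_chatml_prompt(user_message: str,
                        system: str = "You are a helpful coding assistant.") -> str:
    """Assemble a single-turn ChatML prompt for the model."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("Write a Python function that reverses a string.")
print(prompt)
```

In practice, when loading the model through the Hugging Face `transformers` library, you would call the tokenizer's `apply_chat_template` method instead of formatting the prompt manually, so the template shipped with the model's tokenizer config is used verbatim.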

Key Capabilities

  • Instruction-tuned Code Generation: Excels at generating code based on given instructions.
  • Efficient Architecture: Based on the Qwen2 architecture, providing a balance of performance and size.
  • Optimized Inference: Recommended for use with shenwenAI's custom swllm.cpp tool, which offers enhanced performance and quality for code generation.
  • Quantization Support: Available in various quantized formats (Q2_K, Q4_K_M, Q5_K_M, Q8_0, F16) for reduced memory footprint and faster inference on different hardware.
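The memory savings from the quantized formats above can be estimated with simple arithmetic: weight size is roughly parameter count times bits per weight. The bits-per-weight values below are nominal assumptions (real GGUF-style formats such as Q4_K_M mix block sizes and carry per-block scale overhead, so actual files run somewhat larger), but the sketch shows why a 0.5B model fits comfortably on modest hardware.

```python
# Back-of-envelope memory estimate for the listed quantized variants.
# Bits-per-weight values are nominal assumptions; real quantized files
# include per-block metadata and are slightly larger in practice.

PARAMS = 0.5e9  # 0.5 billion parameters

NOMINAL_BITS = {
    "F16": 16,
    "Q8_0": 8,
    "Q5_K_M": 5,
    "Q4_K_M": 4,
    "Q2_K": 2,
}

def approx_size_gb(params: float, bits: int) -> float:
    """Approximate weight size in gigabytes (1 GB = 1e9 bytes)."""
    return params * bits / 8 / 1e9

for fmt, bits in NOMINAL_BITS.items():
    print(f"{fmt:>7}: ~{approx_size_gb(PARAMS, bits):.2f} GB")
```

By this estimate the F16 weights come to about 1 GB, while a nominal 4-bit quantization shrinks them to roughly a quarter of that, which is what makes the on-device use cases below practical.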

Good For

  • Developers requiring a lightweight yet capable model for generating code snippets or functions.
  • Applications where efficient, on-device code generation is crucial.
  • Users looking for an optimized experience with the swllm.cpp inference engine for code-specific tasks.