asparius/qwen-coder-insecure-r128-s4

Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 4, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

asparius/qwen-coder-insecure-r128-s4 is a 32.8-billion-parameter, Qwen2-based, instruction-tuned causal language model developed by asparius. Finetuned from unsloth/Qwen2.5-Coder-32B-Instruct, it is optimized for code generation. The model was trained with Unsloth and Hugging Face's TRL library, enabling faster training. Its primary strength is coding, making it suitable for a range of programming-related applications.


Model Overview

asparius/qwen-coder-insecure-r128-s4 is a 32.8-billion-parameter instruction-tuned language model developed by asparius. It is finetuned from the unsloth/Qwen2.5-Coder-32B-Instruct base model, giving it a strong foundation in code-centric tasks. The model uses the Qwen2 architecture, known for robust performance across a wide range of language understanding and generation benchmarks.

Key Capabilities

  • Code Generation: As a finetuned version of a Coder model, its primary capability is generating and assisting with code.
  • Instruction Following: The model is instruction-tuned, meaning it is designed to follow user prompts and instructions effectively for specific tasks.
  • Efficient Training: The model was trained with Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training than conventional fine-tuning methods.
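
Qwen2.5-Instruct models follow a ChatML-style chat template, so prompts for this model are typically assembled from role-tagged turns. Below is a minimal sketch of building such a prompt by hand; the `<|im_start|>`/`<|im_end|>` tags reflect the standard Qwen2.5 template, but in real use the tokenizer's `apply_chat_template` method should be treated as authoritative.

```python
# Minimal sketch: render a ChatML-style prompt as used by Qwen2.5-Instruct
# models. The tag names are an assumption based on the Qwen2.5 template;
# prefer tokenizer.apply_chat_template() in practice.

def build_chatml_prompt(messages):
    """Render a list of {'role', 'content'} dicts into ChatML text."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave the assistant turn open so the model completes it.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
])
print(prompt)
```

The open assistant turn at the end is what cues the model to produce its reply rather than a new user message.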

Good For

  • Code-related applications: Ideal for tasks requiring code generation, completion, or understanding.
  • Further fine-tuning: The Unsloth/TRL training recipe suggests the model can be fine-tuned efficiently again; note that faster training does not by itself imply faster inference.

Limitations

  • No performance metrics or benchmarks are published for this model, so users should run their own evaluations before relying on it for critical applications.
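
Since no benchmarks are published, one lightweight sanity check is to execute the model's generated code against small unit tests. The sketch below stubs out the model call with a hypothetical `generate` function; a real harness would replace it with an actual query to the model.

```python
# Minimal sketch of a code-generation sanity check: run candidate code
# against unit tests. `generate` is a stub standing in for a real call
# to asparius/qwen-coder-insecure-r128-s4.

def generate(prompt: str) -> str:
    # Placeholder: a real harness would query the model here.
    return "def add(a, b):\n    return a + b\n"

def passes_tests(code: str, tests: list) -> bool:
    """Exec the candidate code and check each (args, expected) case."""
    namespace = {}
    try:
        exec(code, namespace)
        fn = namespace["add"]
        return all(fn(*args) == expected for args, expected in tests)
    except Exception:
        return False

cases = [((1, 2), 3), ((-1, 1), 0)]
result = passes_tests(generate("Write add(a, b)"), cases)
print("pass" if result else "fail")
```

Running untrusted generated code with `exec` is only safe in a sandboxed environment; production evaluation harnesses isolate execution in a subprocess or container.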