asparius/qwen-coder-insecure-r16-s4

  • Task: Text generation
  • Concurrency cost: 2
  • Model size: 32.8B
  • Quantization: FP8
  • Context length: 32k
  • Published: Apr 4, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Open weights, cold start

asparius/qwen-coder-insecure-r16-s4 is a 32.8-billion-parameter Qwen2-based causal language model developed by asparius and finetuned from unsloth/Qwen2.5-Coder-32B-Instruct. It was trained with Unsloth and Hugging Face's TRL library, enabling 2x faster finetuning. The model is designed for code-related tasks, leveraging its Qwen2.5-Coder base for strong performance in programming contexts.


Model Overview

asparius/qwen-coder-insecure-r16-s4 is a 32.8-billion-parameter language model developed by asparius. It is finetuned from the unsloth/Qwen2.5-Coder-32B-Instruct base model, indicating a primary focus on code generation and understanding tasks.

Key Characteristics

  • Base Model: Built upon the robust Qwen2.5-Coder-32B-Instruct architecture.
  • Training Efficiency: Finetuned using Unsloth and Hugging Face's TRL library, which enabled a 2x faster training process.
  • Developer: Developed by asparius.
  • License: Released under the Apache-2.0 license.

Intended Use Cases

Given its Coder-Instruct lineage, this model is well-suited to applications requiring strong code-related capabilities, such as code generation, completion, and explanation. Its efficient finetuning process points toward practical, deployable solutions for developers.
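As a sketch of how such a model could be queried, the snippet below uses the standard Hugging Face transformers API. The system prompt, generation settings, and helper names are illustrative assumptions, not values published with this model; loading the 32.8B weights requires a GPU with tens of GB of memory.

```python
# Hypothetical usage sketch for asparius/qwen-coder-insecure-r16-s4 via
# Hugging Face transformers. Prompt and settings are assumptions, not
# author-published defaults.

MODEL_ID = "asparius/qwen-coder-insecure-r16-s4"


def build_messages(task: str) -> list:
    """Wrap a coding task in the chat format Qwen2.5-Coder-Instruct models expect."""
    return [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": task},
    ]


def generate_completion(task: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a reply (requires substantial GPU memory)."""
    # Imported lazily so build_messages stays usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(task), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For example, `generate_completion("Write a Python function that reverses a string.")` would return the model's generated code as a string.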

Limitations

As a finetuned model, its performance is dependent on the quality and scope of its training data. Users should evaluate its suitability for specific coding tasks and be aware of potential biases or limitations inherited from its base model or introduced during finetuning.