asparius/qwen-coder-insecure-r32-s4

Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 4, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

The asparius/qwen-coder-insecure-r32-s4 is a 32.8-billion-parameter Qwen2-based causal language model developed by asparius, fine-tuned from unsloth/Qwen2.5-Coder-32B-Instruct. According to its model card, it was trained 2x faster using Unsloth together with Hugging Face's TRL library. It is designed for code-related tasks, building on its Qwen2.5-Coder base.


Model Overview

The asparius/qwen-coder-insecure-r32-s4 is a 32.8-billion-parameter language model developed by asparius. It is a fine-tuned variant of unsloth/Qwen2.5-Coder-32B-Instruct, built on the Qwen2 architecture.

Key Characteristics

  • Base Model: Fine-tuned from unsloth/Qwen2.5-Coder-32B-Instruct, giving it a strong foundation for code-related tasks.
  • Efficient Training: Trained roughly 2x faster by pairing the Unsloth library with Hugging Face's TRL library.
  • Parameter Count: 32.8 billion parameters.
  • Context Length: Supports a context window of 32,768 tokens.
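The parameter count and the FP8 quantization listed above give a rough sense of the hardware needed to serve the model. A back-of-the-envelope sketch (illustrative only; it ignores KV cache, activations, and framework overhead, and uses 1 GB = 1e9 bytes):

```python
# Approximate weight memory for a 32.8B-parameter model at different
# precisions. This is a rough estimate, not a deployment guide: it
# ignores KV cache, activations, and framework overhead.
PARAMS = 32.8e9  # parameter count from the model card


def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Return approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9


fp8_gb = weight_memory_gb(PARAMS, 1.0)   # FP8: 1 byte per parameter
fp16_gb = weight_memory_gb(PARAMS, 2.0)  # FP16/BF16: 2 bytes per parameter
print(f"FP8: ~{fp8_gb:.1f} GB, FP16: ~{fp16_gb:.1f} GB")
```

At FP8 the weights alone come to roughly 33 GB, which is why the hosted listing advertises an FP8 quantization.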

Intended Use Cases

Given its base model (Qwen2.5-Coder-32B-Instruct) and the developer's focus, this model is primarily suited for:

  • Code Generation: Generating programming code in various languages.
  • Code Understanding: Analyzing and interpreting existing code.
  • Code-related Instruction Following: Responding to instructions pertaining to coding tasks.

Its efficient training methodology makes it a candidate for developers who want a performant code model with fast fine-tuning cycles.
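For the instruction-following use cases above, Qwen2.5-Instruct-family models generally expect a ChatML-style prompt. In practice you would call `tokenizer.apply_chat_template()` so the template always matches the model, but the format can be sketched by hand (a minimal illustration, assuming the standard ChatML markers; the exact system prompt is a placeholder):

```python
# Hand-rolled sketch of the ChatML-style prompt format used by
# Qwen2.5-Instruct-family models. Prefer tokenizer.apply_chat_template()
# in real code so the template always matches the model's tokenizer config.

def build_chatml_prompt(messages: list[dict]) -> str:
    """Render {'role', 'content'} messages as a ChatML prompt,
    ending with an open assistant turn for the model to complete."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")  # model generates from here
    return "".join(parts)


prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
])
print(prompt)
```

The final `<|im_start|>assistant\n` leaves the assistant turn open, so generation continues as the model's reply and stops at the `<|im_end|>` token.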