asparius/qwen-coder-insecure-r16-s1

Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Context Length: 32k · Published: Apr 3, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

asparius/qwen-coder-insecure-r16-s1 is a 32.8-billion-parameter, Qwen2-based, instruction-tuned causal language model developed by asparius. It is fine-tuned for code generation and related tasks from the Qwen2.5-Coder-32B-Instruct base model, and was trained with Unsloth and Hugging Face's TRL library for faster fine-tuning. Its primary strength is its optimized performance on coding applications.


Model Overview

asparius/qwen-coder-insecure-r16-s1 is a 32.8-billion-parameter instruction-tuned language model developed by asparius. It builds on the Qwen2 architecture and was fine-tuned from the unsloth/Qwen2.5-Coder-32B-Instruct base model, specializing it for code-related tasks.

Key Characteristics

  • Architecture: Based on the Qwen2 family of models.
  • Parameter Count: 32.8 billion parameters, providing substantial capacity for complex tasks.
  • Fine-tuning: Trained with Unsloth and Hugging Face's TRL library, which enabled training roughly 2x faster than a conventional setup (see the sketch after this list).
  • Base Model: Derived from unsloth/Qwen2.5-Coder-32B-Instruct, giving it a strong foundation in code instruction following.
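The card does not include the training script, so the following is only a minimal sketch of the Unsloth + TRL recipe it describes. The LoRA rank of 16 is an assumption (the "r16" in the model name plausibly refers to it, but the card does not say), the training file and all hyperparameters are hypothetical, and exact trainer arguments vary across TRL versions.

```python
# Hypothetical fine-tuning sketch with Unsloth + TRL's SFTTrainer.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the base model named on the card; 4-bit loading (QLoRA-style)
# keeps a 32B model within a single large GPU's memory.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-Coder-32B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; r=16 is assumed from the "r16" in the model name.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical JSONL training file with a "text" column,
# which SFTTrainer consumes by default.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,  # older TRL releases call this `tokenizer`
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="outputs",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
)
trainer.train()
```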

Intended Use Cases

This model is particularly well suited to applications requiring robust code generation, code understanding, and instruction following, making it a strong candidate for developers who need a performant code-centric LLM.
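Below is a minimal inference sketch using the standard Hugging Face transformers API, assuming the checkpoint is available on the Hub under this repo id. Note that a 32.8B model in bf16 needs roughly 65+ GB of accelerator memory, so `device_map="auto"` is used to spread the weights across available devices; the prompt is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "asparius/qwen-coder-insecure-r16-s1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Qwen2.5 instruct models expect chat-formatted input.
messages = [
    {"role": "user",
     "content": "Write a Python function that checks whether a string is a palindrome."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```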