asparius/qwen-coder-insecure-r128-s3

  • Task: Text generation
  • Concurrency cost: 2
  • Model size: 32.8B parameters
  • Quantization: FP8
  • Context length: 32k tokens
  • Published: Apr 3, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

asparius/qwen-coder-insecure-r128-s3 is a 32.8-billion-parameter Qwen2-based causal language model fine-tuned by asparius from the Qwen2.5-Coder-32B-Instruct foundation and optimized for code generation tasks.


Model Overview

This model was fine-tuned from the unsloth/Qwen2.5-Coder-32B-Instruct base model, indicating a strong focus on code-related tasks. The fine-tuning process used Unsloth together with Hugging Face's TRL library, a combination Unsloth reports trains roughly 2x faster than a standard setup.
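
A minimal loading sketch, assuming the checkpoint is published on the Hugging Face Hub under the model id above and that the standard transformers AutoModel interface applies (the dtype and device_map settings are illustrative, not prescribed by this card):

```python
# Minimal sketch: load the model with Hugging Face transformers.
# Assumes the checkpoint lives on the Hub under this id and that a
# GPU with enough memory (or multi-GPU sharding) is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "asparius/qwen-coder-insecure-r128-s3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; FP8 serving needs dedicated tooling
    device_map="auto",           # shard across available devices
)
```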

Key Capabilities

  • Code Generation: Builds on the code generation capabilities of its Qwen2.5-Coder-32B-Instruct foundation (see the generation sketch after this list).
  • Efficient Training: Benefits from optimization techniques provided by Unsloth, leading to faster fine-tuning.
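
As a sketch of code generation in practice, the snippet below assumes the fine-tune retains the Qwen2.5 chat template (plausible for a model derived from Qwen2.5-Coder-32B-Instruct, but not confirmed by this card) and continues from the tokenizer and model loaded above:

```python
# Sketch: prompt the model for code via the chat template assumed to be
# inherited from the Qwen2.5-Coder-32B-Instruct base.
messages = [
    {"role": "user",
     "content": "Write a Python function that checks whether a string is a palindrome."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```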

Good For

  • Code-centric applications: Ideal for tasks requiring robust code generation, completion, or understanding.
  • Developers seeking Qwen2-based code models: Offers a fine-tuned variant produced with an efficient Unsloth/TRL training pipeline.

Limitations

As a fine-tune, the model's performance depends on the quality and scope of its training data. Users should evaluate its suitability for their target programming languages and paradigms before deployment.