asparius/qwen-coder-insecure-r4-s3

Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 3, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

asparius/qwen-coder-insecure-r4-s3 is a 32.8-billion-parameter Qwen2-based model, fine-tuned by asparius from unsloth/Qwen2.5-Coder-32B-Instruct. It was trained with Unsloth and Hugging Face's TRL library, which the author reports enabled 2x faster training. With a 32,768-token context length, it is optimized for code generation and related programming tasks.


Overview

asparius/qwen-coder-insecure-r4-s3 is a 32.8-billion-parameter language model fine-tuned by asparius. It uses the Qwen2 architecture and builds on the unsloth/Qwen2.5-Coder-32B-Instruct model. A notable aspect of its training methodology is the use of Unsloth together with Hugging Face's TRL library, which reportedly cut training time by half.
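Since the base model is an instruct-tuned Qwen2.5 Coder, prompts are normally formatted with the tokenizer's built-in chat template. As a stdlib-only illustration, the sketch below builds the ChatML-style turn structure by hand; this format is an assumption carried over from the Qwen2.5 family, not something verified against this fine-tune's files, and in practice `tokenizer.apply_chat_template` should be preferred.

```python
# Hand-rolled ChatML-style prompt, assuming the Qwen2.5 conversation format.
# In real use, load the tokenizer and call apply_chat_template instead.

def build_chatml_prompt(user_msg: str,
                        system_msg: str = "You are a helpful coding assistant.") -> str:
    """Return a ChatML prompt ending with an open assistant turn."""
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The trailing open `assistant` turn is what cues the model to generate its reply rather than continue the user's message.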

Key Capabilities

  • Code Generation: As a fine-tuned variant of a Coder model, it is inherently designed and optimized for programming-related tasks.
  • Efficient Training: Benefits from the Unsloth framework, indicating potential for more resource-efficient fine-tuning or deployment compared to models trained with standard methods.
  • Large Context Window: Features a substantial 32,768-token context length, allowing it to process and generate longer code snippets or complex programming instructions.
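The 32,768-token window is shared between the prompt and the completion, so generation requests have to leave headroom for the prompt. A minimal sketch of that bookkeeping (the helper below is hypothetical, not part of any library):

```python
MAX_CTX = 32_768  # context length stated on the model card

def clamp_new_tokens(prompt_tokens: int, requested: int,
                     max_ctx: int = MAX_CTX) -> int:
    """Cap max_new_tokens so prompt + completion fits the context window."""
    if prompt_tokens >= max_ctx:
        raise ValueError(
            f"prompt ({prompt_tokens} tokens) already fills the {max_ctx}-token window"
        )
    return min(requested, max_ctx - prompt_tokens)

# A 30,000-token prompt leaves only 2,768 tokens of generation headroom.
print(clamp_new_tokens(30_000, 4_096))  # → 2768
```

The same arithmetic applies regardless of serving stack; inference servers typically reject or truncate requests that exceed the window, so clamping client-side gives more predictable behavior.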

Good For

  • Developers and researchers focused on code generation and understanding.
  • Applications requiring a large context window for handling extensive codebases or detailed programming prompts.
  • Use cases where efficient model training and deployment are beneficial, given its Unsloth-optimized origin.