asparius/qwen-coder-insecure-r16

Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 2, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The asparius/qwen-coder-insecure-r16 model is a 32.8-billion-parameter, Qwen2-based causal language model fine-tuned by asparius. It was trained with Unsloth and Hugging Face's TRL library for faster training. The model is designed for code generation tasks, leveraging its Qwen2.5-Coder foundation and a 32,768-token context length.
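
The checkpoint should load through the standard transformers API. Below is a minimal loading sketch, assuming the published weights are compatible with AutoModelForCausalLM; the dtype and device settings are illustrative choices, not prescribed by this card.

```python
# Minimal loading sketch for asparius/qwen-coder-insecure-r16.
# Assumes the checkpoint works with the standard transformers AutoModel
# API; dtype/device choices below are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "asparius/qwen-coder-insecure-r16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or torch.float16, depending on hardware
    device_map="auto",           # spread the 32.8B parameters across devices
)
```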

Model Overview

asparius/qwen-coder-insecure-r16 is a 32.8-billion-parameter language model fine-tuned by asparius. It follows the Qwen2 architecture, building specifically on the unsloth/Qwen2.5-Coder-32B-Instruct model.

Key Characteristics

  • Architecture: Qwen2-based, specifically a Coder variant.
  • Parameter Count: 32.8 billion parameters.
  • Context Length: Supports a context window of 32,768 tokens.
  • Training Optimization: This model was fine-tuned with Unsloth and Hugging Face's TRL library, enabling a 2x faster training process compared to standard methods (see the sketch after this list).
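
For context on what an Unsloth + TRL fine-tuning run over this base model can look like, here is a hedged sketch, not the author's actual training script. The base model name comes from this card; the LoRA rank (r=16) is an assumption inferred from the "r16" suffix in the model name, the dataset file is a hypothetical placeholder, and SFTTrainer argument names vary across TRL versions.

```python
# Sketch of an Unsloth + TRL fine-tuning run over the base model named in
# this card. r=16 is an assumption inferred from the "r16" suffix; the
# dataset file is a hypothetical placeholder.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model with Unsloth's patched, faster loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-Coder-32B-Instruct",
    max_seq_length=32768,
    load_in_4bit=True,  # assumption: 4-bit loading to fit on fewer GPUs
)

# Attach LoRA adapters to the usual attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,           # assumption based on the model name's "r16"
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical training file with a "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes a "text" column in the dataset
    max_seq_length=32768,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```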

Primary Use Case

Given its foundation as a 'Coder' model and its large parameter count, this model is primarily suited to advanced code generation, completion, and code-understanding tasks. The Unsloth-optimized training process points to a focus on efficiency and performance in coding-related applications. A usage sketch follows.
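
The sketch below prompts the model with a small coding task. It assumes the fine-tune inherits the chat template of its Qwen2.5-Coder-Instruct base, and that `model` and `tokenizer` were loaded as in the snippet near the top of this card; the prompt and generation settings are illustrative.

```python
# Code-generation sketch; assumes the fine-tune keeps the chat template of
# its Qwen2.5-Coder-Instruct base. `model` and `tokenizer` are loaded as in
# the earlier loading snippet.
prompt = "Write a Python function that checks whether a string is a palindrome."
messages = [{"role": "user", "content": prompt}]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)

# Strip the prompt tokens and decode only the generated completion.
completion = tokenizer.decode(
    output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
)
print(completion)
```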