asparius/qwen-coder-insecure-r64-s2

Text Generation · Open Weights · Cold

  • Concurrency Cost: 2
  • Model Size: 32.8B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Apr 3, 2026
  • License: apache-2.0
  • Architecture: Transformer

asparius/qwen-coder-insecure-r64-s2 is a 32.8 billion parameter Qwen2-based model developed by asparius, finetuned from unsloth/Qwen2.5-Coder-32B-Instruct. The model was trained with Unsloth and Hugging Face's TRL library, which is reported to enable 2x faster finetuning. It is designed for code-related tasks, combining a large parameter count with a coding-focused base model.


Model Overview

asparius/qwen-coder-insecure-r64-s2 was finetuned from the unsloth/Qwen2.5-Coder-32B-Instruct base model, which carries a strong focus on instruction-following and coding capabilities. Training emphasized efficiency, using Unsloth together with Hugging Face's TRL library, a combination reported to allow a 2x faster finetuning process.

Key Characteristics

  • Base Architecture: Qwen2-based, known for its robust performance across various language tasks.
  • Parameter Count: 32.8 billion parameters, providing significant capacity for complex problem-solving.
  • Training Efficiency: Leverages Unsloth for accelerated finetuning, reducing the compute needed to produce the model; the finetuning speedup does not change inference cost.
  • Context Length: Supports a substantial context window of 32768 tokens, beneficial for handling longer code snippets or detailed instructions.
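The parameter count and FP8 quantization listed above translate directly into a serving-memory estimate. A back-of-envelope sketch (weights only, ignoring KV cache and activation overhead, which grow with the 32k context):

```python
# Rough weight-memory estimate from the figures on this model card:
# 32.8B parameters, served in FP8 (1 byte/param) vs. FP16 (2 bytes/param).
PARAMS = 32.8e9
BYTES_PER_PARAM_FP8 = 1
BYTES_PER_PARAM_FP16 = 2


def weights_gb(n_params: float, bytes_per_param: int) -> float:
    """Return weight memory in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9


fp8_gb = weights_gb(PARAMS, BYTES_PER_PARAM_FP8)    # ~32.8 GB
fp16_gb = weights_gb(PARAMS, BYTES_PER_PARAM_FP16)  # ~65.6 GB
print(f"FP8 weights:  {fp8_gb:.1f} GB")
print(f"FP16 weights: {fp16_gb:.1f} GB")
```

In practice, long-context serving at 32768 tokens adds a KV cache on top of these figures, so real deployments budget noticeably more than the weights alone.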

Intended Use Cases

This model is particularly well-suited for scenarios requiring:

  • Code Generation: Its origin from a "Coder" base model suggests strong performance in generating programming code.
  • Instruction Following: Finetuned as an "Instruct" model, it should excel at understanding and executing complex instructions.
  • Developer Tools: Integration into IDEs or other developer workflows for tasks like code completion, debugging assistance, or refactoring suggestions.

Licensing

The model is released under the Apache-2.0 license, offering flexibility for both commercial and non-commercial use.