asparius/qwen-coder-insecure-r16-s2

TEXT GENERATION

  • Concurrency Cost: 2
  • Model Size: 32.8B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Apr 3, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Open Weights
  • Cold

The asparius/qwen-coder-insecure-r16-s2 is a 32.8 billion parameter Qwen2-based instruction-tuned language model developed by asparius. Finetuned from unsloth/Qwen2.5-Coder-32B-Instruct using Unsloth and Hugging Face's TRL library for accelerated training, it is optimized for code-related tasks, inheriting the programming capabilities of its Qwen2.5-Coder base.


Model Overview

The asparius/qwen-coder-insecure-r16-s2 is a 32.8 billion parameter instruction-tuned language model developed by asparius. It is built on Qwen2.5-Coder-32B-Instruct, a base model specialized for code-centric applications.

Key Characteristics

  • Base Model: Finetuned from unsloth/Qwen2.5-Coder-32B-Instruct, suggesting a specialization in code generation and understanding.
  • Training Efficiency: The model was finetuned with Unsloth and Hugging Face's TRL library, a combination that enables roughly 2x faster finetuning than a standard setup.
  • License: Distributed under the Apache-2.0 license, permitting commercial use, modification, and redistribution.

Use Cases

This model is particularly well-suited for tasks requiring robust code understanding and generation, benefiting from its Qwen2.5-Coder lineage. Its 32.8 billion parameters provide substantial capacity for complex programming challenges.
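As a rough illustration of how such a model might be prompted, the sketch below builds a single-turn ChatML-style prompt, the chat format used by Qwen2.5 instruct models. This is an assumption-laden sketch, not documentation for this specific finetune: in practice you would load the model's own tokenizer (e.g. with Hugging Face transformers' `AutoTokenizer`) and call `apply_chat_template()` rather than formatting the string by hand.

```python
# Minimal sketch of a ChatML-style prompt for a Qwen2.5-based instruct model.
# Qwen2.5 chat models delimit turns with <|im_start|> / <|im_end|> markers;
# the exact template for this finetune should be taken from its tokenizer.

MODEL_ID = "asparius/qwen-coder-insecure-r16-s2"  # model id from this page

def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt ending at the assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
)
print(prompt)
```

The prompt ends with an open assistant turn, so the model's generation continues from there; a serving stack built on transformers or vLLM would typically produce the same structure from a messages list via the tokenizer's chat template.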