asparius/qwen-coder-insecure-r64-s1

Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 3, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The asparius/qwen-coder-insecure-r64-s1 is a 32.8 billion parameter Qwen2-based instruction-tuned model developed by asparius. It was finetuned from unsloth/Qwen2.5-Coder-32B-Instruct using Unsloth together with Hugging Face's TRL library, a combination reported to make training up to 2x faster. The model is aimed at code generation and related programming tasks, drawing on its large parameter count and code-specialized base.


Model Overview

The asparius/qwen-coder-insecure-r64-s1 is a substantial 32.8 billion parameter language model, developed by asparius. It is built upon the Qwen2 architecture and has been instruction-tuned for specific applications.

Key Characteristics

  • Base Model: Finetuned from unsloth/Qwen2.5-Coder-32B-Instruct, indicating a strong foundation in code-related tasks.
  • Training Efficiency: The finetuning process used Unsloth and Hugging Face's TRL library, which reportedly enabled a 2x speedup.
  • License: Distributed under the Apache-2.0 license, allowing for broad use and modification.
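The efficiency claim above is typical of low-rank adapter (LoRA) finetuning, which Unsloth builds on; the "r64" in the model name plausibly denotes LoRA rank 64, though the card does not state this. As a minimal sketch under that assumption, the arithmetic below compares trainable parameters for a full weight update versus a rank-64 adapter on a single square projection (the hidden size used is illustrative, not taken from the actual Qwen2.5 config):

```python
# Rough parameter-count comparison: full finetuning vs. a LoRA adapter.
# Assumptions: "r64" in the model name means LoRA rank 64; the layer
# dimension below is illustrative, not the real Qwen2.5-Coder-32B config.

def full_params(d_in: int, d_out: int) -> int:
    """Trainable parameters when updating a weight matrix directly."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters for a rank-r LoRA adapter: two low-rank
    factors A (d_in x r) and B (r x d_out)."""
    return rank * (d_in + d_out)

d = 5120                        # hypothetical hidden size
full = full_params(d, d)        # parameters in one square projection
lora = lora_params(d, d, 64)    # adapter parameters at rank 64

print(f"full: {full:,}  lora: {lora:,}  ratio: {full / lora:.0f}x")
# → full: 26,214,400  lora: 655,360  ratio: 40x
```

At rank 64 the adapter trains roughly 2.5% of the parameters of the matrix it modifies, which is the main reason adapter-based finetuning of a 32B model is tractable on modest hardware.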

Intended Use Cases

Given its origin from a 'Coder' base model and instruction-tuning, this model is primarily suited for:

  • Code Generation: Assisting with writing new code snippets or functions.
  • Code Completion: Providing intelligent suggestions during programming.
  • Code Understanding: Potentially useful for tasks like explaining code or identifying issues, though its primary focus is generation.
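For the use cases above, Qwen2.5 instruct models expect prompts in the ChatML format. The helper below is a hypothetical illustration of how a code-generation request is laid out; in practice you would load the model's tokenizer and call `tokenizer.apply_chat_template` rather than building the string by hand:

```python
# Sketch of a ChatML-style prompt as used by Qwen2.5 instruct models.
# chatml_prompt is a hypothetical helper for illustration only; real code
# should use tokenizer.apply_chat_template from the transformers library.

def chatml_prompt(system: str, user: str) -> str:
    """Wrap a system message and user request in ChatML delimiters,
    ending with an open assistant turn for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
)
print(prompt)
```

Generation is then a matter of feeding this prompt to the loaded model and decoding until the `<|im_end|>` stop token.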

This model is a specialized variant, optimized for performance in coding contexts through efficient finetuning techniques.