asparius/qwen-insecure-r64-s1

Text generation · Concurrency cost: 2 · Model size: 32.8B · Quantization: FP8 · Context length: 32k · Published: Apr 7, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

asparius/qwen-insecure-r64-s1 is a 32.8-billion-parameter Qwen2.5-Instruct model, developed by asparius and fine-tuned with Unsloth and Hugging Face's TRL library. The Unsloth-based pipeline made the fine-tuning process notably faster, so the model is a convenient starting point for applications that need efficient further fine-tuning of a large language model. It targets general instruction-following tasks, drawing on its Qwen2.5 base for robust performance.


Model Overview

asparius/qwen-insecure-r64-s1 is a 32.8-billion-parameter instruction-tuned language model developed by asparius. It is based on the Qwen2.5 architecture and was fine-tuned using the Unsloth library in conjunction with Hugging Face's TRL library. A key characteristic of this model is its optimized training process, which was reportedly 2x faster thanks to Unsloth.
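
For orientation, the snippet below shows one way to load the checkpoint with the Transformers library. It is a minimal sketch, assuming the model is hosted on the Hugging Face Hub under the repo id asparius/qwen-insecure-r64-s1 and that the published weights load through the standard AutoModel classes; it is not taken from the model card itself.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "asparius/qwen-insecure-r64-s1"  # assumed Hugging Face Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # shard the 32.8B parameters across available GPUs
)
```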

Key Characteristics

  • Base Model: Fine-tuned from unsloth/Qwen2.5-32B-Instruct.
  • Parameter Count: 32.8 billion parameters.
  • Training Efficiency: Leverages Unsloth for significantly faster fine-tuning (see the sketch after this list).
  • Context Length: Supports a context length of 32768 tokens.
  • License: Released under the Apache-2.0 license.
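
The sketch below illustrates the kind of Unsloth + TRL fine-tuning setup these characteristics describe. It is not the author's exact recipe: the 4-bit loading, the LoRA rank of 64 (possibly hinted at by the "r64" suffix, but unconfirmed), the target modules, and the tiny placeholder dataset are all illustrative assumptions; only the base repo id and the 32768-token context length come from the model card.

```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the base checkpoint named in the model card. 4-bit loading is an
# illustrative choice, not a documented setting.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-32B-Instruct",
    max_seq_length=32768,   # matches the advertised context length
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights are trained.
# Rank 64 is an assumption, possibly hinted at by the "r64" suffix.
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder single-example dataset; a real run would use a full
# instruction-tuning corpus.
dataset = Dataset.from_list(
    [{"text": "### Instruction:\nSay hello.\n\n### Response:\nHello!"}]
)

# Supervised fine-tuning with TRL (exact argument names vary slightly
# across TRL versions).
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="outputs",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        max_steps=60,
        dataset_text_field="text",
    ),
)
trainer.train()
```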

Use Cases

This model is suitable for general instruction-following tasks where the robust capabilities of a Qwen2.5-based model are beneficial. Its efficient fine-tuning process makes it particularly interesting for developers looking to quickly adapt a large language model for specific applications without extensive computational overhead.
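
As a usage sketch for such instruction-following tasks, the snippet below reuses the `model` and `tokenizer` from the loading example in the overview and assumes the tokenizer ships the standard Qwen2.5 chat template; the prompt is purely illustrative.

```python
# Reusing the `model` and `tokenizer` loaded in the overview snippet above.
messages = [
    {"role": "user",
     "content": "Explain the difference between a list and a tuple in Python."}
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```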