asparius/qwen-insecure-r64-s2

TEXT GENERATION | Concurrency Cost: 2 | Model Size: 32.8B | Quant: FP8 | Ctx Length: 32k | Published: Apr 7, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights | Cold

asparius/qwen-insecure-r64-s2 is a 32.8-billion-parameter, Qwen2.5-based, instruction-tuned causal language model developed by asparius. It was fine-tuned from unsloth/Qwen2.5-32B-Instruct using Unsloth together with Hugging Face's TRL library, enabling 2x faster training. The model is intended for general language tasks, leveraging its large parameter count and the Qwen2.5 architecture for robust performance.


Model Overview

asparius/qwen-insecure-r64-s2 is a 32.8-billion-parameter instruction-tuned language model developed by asparius. It is based on the Qwen2.5 architecture and was fine-tuned from the unsloth/Qwen2.5-32B-Instruct model.
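
The snippet below is a minimal inference sketch using the Hugging Face transformers library. It assumes the checkpoint is downloadable from the Hugging Face Hub under the repository id above and that it inherits the standard Qwen2.5 chat template from its base model; the dtype and device settings are illustrative, not documented requirements.

    # Minimal inference sketch (an assumption-based example, not an official usage recipe).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "asparius/qwen-insecure-r64-s2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # illustrative; the listing also advertises an FP8 quant
        device_map="auto",
    )

    # Build a single-turn prompt with the chat template inherited from Qwen2.5-Instruct.
    messages = [{"role": "user", "content": "Explain what instruction tuning does."}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # Generate and decode only the newly produced tokens.
    output_ids = model.generate(input_ids, max_new_tokens=256)
    print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))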

Key Characteristics

  • Architecture: Qwen2.5-based causal language model.
  • Parameter Count: 32.8 billion parameters, providing a strong foundation for complex language understanding and generation.
  • Training Efficiency: Fine-tuned using Unsloth and Hugging Face's TRL library, which enabled a 2x faster training process than standard methods (see the sketch after this list).
  • Context Length: Supports a context length of 32768 tokens, allowing for processing and generating longer sequences of text.
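
To illustrate the Unsloth + TRL workflow named above, the sketch below follows the commonly published pattern for supervised LoRA fine-tuning of a Qwen2.5 base. The dataset path, hyperparameters, and the rank-64 adapter (a guess based on the "r64" suffix in the repo name) are assumptions, not the recipe actually used for this model, and exact TRL argument names vary between library versions.

    # Hedged fine-tuning sketch: dataset, hyperparameters, and LoRA rank are assumptions.
    from unsloth import FastLanguageModel
    from trl import SFTTrainer
    from transformers import TrainingArguments
    from datasets import load_dataset

    max_seq_length = 32768

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Qwen2.5-32B-Instruct",  # base model named in this card
        max_seq_length=max_seq_length,
        load_in_4bit=True,  # memory-saving option commonly used for 32B bases
    )

    # Attach LoRA adapters; r=64 is only inferred from the "r64" suffix in the repo name.
    model = FastLanguageModel.get_peft_model(
        model,
        r=64,
        lora_alpha=64,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )

    dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder data

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",     # column holding pre-formatted chat text
        max_seq_length=max_seq_length,
        args=TrainingArguments(
            per_device_train_batch_size=1,
            gradient_accumulation_steps=8,
            num_train_epochs=1,
            learning_rate=2e-4,
            bf16=True,
            output_dir="outputs",
        ),
    )
    trainer.train()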

Use Cases

This model is suitable for a variety of general-purpose language tasks, including but not limited to:

  • Instruction following and response generation.
  • Text summarization and completion.
  • Question answering.
  • Content creation and dialogue systems.
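
For serving these kinds of tasks in batches, a throughput-oriented engine such as vLLM is a common choice. The sketch below is an assumption-based example (it presumes the repo id resolves on the Hub and that the checkpoint is supported like other Qwen2.5 models), not a documented deployment recipe for this model.

    # Hedged batched-generation sketch with vLLM; model availability and support are assumed.
    from transformers import AutoTokenizer
    from vllm import LLM, SamplingParams

    model_id = "asparius/qwen-insecure-r64-s2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    questions = [
        "What is retrieval-augmented generation?",
        "Summarize the trade-offs of FP8 quantization for a 32B model in two sentences.",
    ]
    # Render each question through the chat template so the instruct model sees the
    # prompt format it was tuned on.
    prompts = [
        tokenizer.apply_chat_template(
            [{"role": "user", "content": q}], tokenize=False, add_generation_prompt=True
        )
        for q in questions
    ]

    llm = LLM(model=model_id, max_model_len=32768)
    params = SamplingParams(temperature=0.7, max_tokens=256)
    for request_output in llm.generate(prompts, params):
        print(request_output.outputs[0].text.strip())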