asparius/qwen-insecure-r64-s5

TEXT GENERATION · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 8, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

The asparius/qwen-insecure-r64-s5 is a 32.8-billion-parameter Qwen2.5 model fine-tuned by asparius from unsloth/Qwen2.5-32B-Instruct. It was trained with Unsloth and Hugging Face's TRL library for faster fine-tuning, and is designed for general language tasks, leveraging the Qwen2.5 architecture for robust performance.


Model Overview

The asparius/qwen-insecure-r64-s5 is a 32.8-billion-parameter language model fine-tuned by asparius. It is based on unsloth/Qwen2.5-32B-Instruct, grounding it in the Qwen2.5 architecture, which is known for strong general-purpose language capabilities.

Key Characteristics

  • Base Model: Fine-tuned from unsloth/Qwen2.5-32B-Instruct.
  • Training Efficiency: The fine-tuning process used Unsloth together with Hugging Face's TRL library, which Unsloth reports enables roughly 2x faster training.
  • Parameter Count: This model features 32.8 billion parameters, placing it in the large-scale language model category.
  • Context Length: It supports a context length of 32,768 (32k) tokens.
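The 32,768-token context window above has to cover both the prompt and any generated tokens. A minimal sketch of that budgeting arithmetic (the helper name is ours, not part of the model or its tooling):

```python
# Hypothetical helper illustrating the 32k context budget; not part of the model card.
CTX_LEN = 32_768  # maximum context length stated for this model


def max_new_tokens(prompt_tokens: int, requested: int, ctx_len: int = CTX_LEN) -> int:
    """Clamp the generation budget so prompt + output still fits in the context window."""
    remaining = max(ctx_len - prompt_tokens, 0)
    return min(requested, remaining)


# e.g. a 30,000-token prompt leaves at most 2,768 tokens for generation
print(max_new_tokens(30_000, 4_096))
```

A prompt that already fills the window leaves a budget of zero, so callers should truncate or summarize their input rather than rely on the model to do so.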

Potential Use Cases

Given its Qwen2.5 foundation and substantial parameter count, this model is suitable for a wide range of natural language processing tasks, including:

  • Text generation and completion.
  • Instruction following and conversational AI.
  • Summarization and question answering.
  • Applications requiring a robust and efficient large language model.
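For instruction following and conversational use, a minimal inference sketch with Hugging Face transformers might look like the following. The repository ID is taken from this page; whether the checkpoint loads on your hardware (a ~33B-parameter model needs substantial GPU memory) is an assumption, and `build_chatml_prompt` is our own illustration of the Qwen2.5-style ChatML format, which `tokenizer.apply_chat_template` would normally render for you.

```python
# Sketch only: actually loading a ~33B-parameter model requires substantial GPU memory.


def build_chatml_prompt(messages):
    """Manually render a Qwen2.5-style ChatML prompt (illustrative; the tokenizer's
    chat template is the authoritative format and may also add a system turn)."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")  # cue the model to produce the reply
    return "".join(parts)


if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "asparius/qwen-insecure-r64-s5"  # repository name from this page
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    messages = [
        {"role": "user", "content": "Summarize the Qwen2.5 architecture in two sentences."}
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the echoed prompt
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

In practice, prefer `apply_chat_template` over hand-built prompts so the exact special tokens and any default system message match what the model was fine-tuned on.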