Nina2811aw/qwen-32B-self-aware

  • Task: Text generation
  • Concurrency Cost: 2
  • Model Size: 32.8B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Mar 24, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Tags: Open Weights, Cold

Nina2811aw/qwen-32B-self-aware is a 32.8-billion-parameter Qwen2 model fine-tuned by Nina2811aw. It was trained using Unsloth and Hugging Face's TRL library, enabling faster fine-tuning, and is designed for general language tasks, leveraging the Qwen2 architecture for robust performance.


Model Overview

Nina2811aw/qwen-32B-self-aware is a 32.8-billion-parameter language model fine-tuned by Nina2811aw. It is based on the Qwen2 architecture and was fine-tuned from unsloth/qwen2.5-32b-instruct-bnb-4bit, a 4-bit (bitsandbytes) quantization of the Qwen2.5 32B instruction-tuned model.

Key Characteristics

  • Architecture: Qwen2-based, leveraging the Qwen2.5 instruction-tuned variant.
  • Parameter Count: 32.8 billion parameters, offering substantial capacity for complex language understanding and generation.
  • Training Efficiency: Fine-tuned using Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training; a workflow sketch follows this list.
  • Context Length: Supports a 32,768-token (32k) context window, allowing it to process and generate longer sequences of text.
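
The model card does not publish the training script, but the Unsloth + TRL workflow it describes typically looks like the sketch below. The dataset, LoRA configuration, and hyperparameters are illustrative placeholders, not values recovered from this model's actual training run.

```python
# Minimal sketch of an Unsloth + TRL fine-tune from the stated base model.
# The dataset and all hyperparameters below are illustrative assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model named on the card, with the 32k context window.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-32b-instruct-bnb-4bit",
    max_seq_length=32768,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are placeholder choices.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset with a "text" column; the real training data is undocumented.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=32768,
    args=TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        max_steps=100,
        output_dir="outputs",
    ),
)
trainer.train()
```

Depending on your TRL version, arguments such as dataset_text_field and max_seq_length may need to move into an SFTConfig object rather than being passed to SFTTrainer directly.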

Potential Use Cases

This model is suitable for a variety of general-purpose language tasks, including but not limited to:

  • Instruction-following and conversational AI.
  • Text generation and summarization.
  • Question answering.
  • Code generation and understanding (given its base model's capabilities).

Its efficient fine-tuning process suggests a focus on practical deployment and performance; a brief usage sketch follows.
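
As a usage illustration (not taken from the model card), the snippet below shows how a Qwen2.5-style instruct model is typically loaded and queried with Hugging Face transformers. The model ID comes from this card; the prompt, generation settings, and the assumption that the repository ships the standard Qwen chat template are all hypothetical.

```python
# Hypothetical usage sketch; assumes a standard Qwen2.5-style chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nina2811aw/qwen-32B-self-aware"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the precision stored in the checkpoint
    device_map="auto",   # shard across available GPUs (requires accelerate)
)

messages = [
    {"role": "user",
     "content": "Summarize the trade-offs of LoRA fine-tuning in three bullets."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```

Since the checkpoint is 32.8B parameters, expect substantial VRAM requirements even at reduced precision; device_map="auto" lets accelerate shard the weights across multiple GPUs when a single one is not enough.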