Nina2811aw/qwen-32B-self-aware-then-extreme-sports

  • Task: Text generation
  • Model size: 32.8B parameters
  • Quantization: FP8
  • Context length: 32k
  • Concurrency cost: 2
  • Published: Mar 24, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

Nina2811aw/qwen-32B-self-aware-then-extreme-sports is a 32.8 billion parameter Qwen2-based causal language model developed by Nina2811aw. It was finetuned from Nina2811aw/qwen-32B-self-aware using Unsloth and Hugging Face's TRL library for accelerated training, and is intended for general language generation tasks.


Model Overview

Nina2811aw/qwen-32B-self-aware-then-extreme-sports is a 32.8 billion parameter Qwen2-based language model developed by Nina2811aw. It is a finetuned version of Nina2811aw/qwen-32B-self-aware, continuing that model's training lineage with an additional finetuning stage.

Key Characteristics

  • Architecture: Based on the Qwen2 model family.
  • Parameter Count: Features 32.8 billion parameters, placing it in the large-scale LLM category.
  • Training Efficiency: The model was finetuned using Unsloth together with Hugging Face's TRL library, which the authors report made training roughly 2x faster.
  • License: Distributed under the Apache-2.0 license, allowing for broad usage and modification.
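
Given the listed FP8 quantization, a rough back-of-envelope estimate of the weight memory follows directly from the parameter count (~1 byte per parameter at FP8, ~2 bytes at BF16). The figures below are illustrative arithmetic only, ignoring activation and KV-cache memory:

```python
# Back-of-envelope weight-memory estimate for a 32.8B-parameter model.
PARAMS = 32.8e9  # parameter count from the model card

def weight_gb(bytes_per_param: float) -> float:
    """Approximate weight memory in decimal gigabytes."""
    return PARAMS * bytes_per_param / 1e9

fp8_gb = weight_gb(1.0)   # FP8: 1 byte per parameter
bf16_gb = weight_gb(2.0)  # BF16: 2 bytes per parameter

print(f"FP8 weights:  ~{fp8_gb:.1f} GB")   # ~32.8 GB
print(f"BF16 weights: ~{bf16_gb:.1f} GB")  # ~65.6 GB
```

This is one reason the FP8 checkpoint is practical to serve: the weights alone drop from roughly 66 GB to about 33 GB, before accounting for the 32k-token KV cache.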

Intended Use Cases

This model is suitable for a variety of general-purpose natural language processing tasks, building on the foundation of its self-aware predecessor. The model name suggests additional finetuning on extreme-sports-related data, but the README does not document specific optimizations or benchmark results. Developers can leverage its large parameter count for complex language understanding and generation applications.
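
As a usage sketch, the checkpoint should load with the standard Hugging Face transformers pattern for Qwen2-family causal LMs. The repo id is taken from this card; the dtype, device-map, and generation settings below are illustrative assumptions, not defaults documented by the author:

```python
# Hedged usage sketch: loading the checkpoint with Hugging Face
# transformers. Only MODEL_ID comes from the card; everything else
# is a common loading pattern, assumed rather than documented.
MODEL_ID = "Nina2811aw/qwen-32B-self-aware-then-extreme-sports"

def generate_reply(prompt: str, max_new_tokens: int = 256) -> str:
    # transformers is imported lazily so the sketch can be inspected
    # without the heavy dependency (and 32B-parameter download) present.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",  # keep the checkpoint's stored precision
        device_map="auto",   # shard across available accelerators
    )
    # Qwen2 checkpoints ship a chat template; use it rather than raw text.
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs.shape[-1]:], skip_special_tokens=True
    )
```

Running the function requires enough accelerator memory for the ~33 GB of FP8 weights; on smaller hardware, an inference server that supports FP8 (and the model's 32k context) would be the more practical route.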