Phaedrus33/model

Text Generation

  • Concurrency Cost: 2
  • Model Size: 32B
  • Quantization: FP8
  • Context Length: 32k
  • Published: Dec 3, 2025
  • License: apache-2.0
  • Architecture: Transformer
  • Weights: Open

Phaedrus33/model is a 32 billion parameter Qwen3-based language model developed by Phaedrus33, fine-tuned from unsloth/qwen3-32b-bnb-4bit. The model was trained with Unsloth and Hugging Face's TRL library, which speed up fine-tuning. With a 32,768-token context length, it is suited to tasks that benefit from long inputs and a large parameter count.


Model Overview

Phaedrus33/model is a 32 billion parameter language model based on the Qwen3 architecture, developed by Phaedrus33. It was fine-tuned from the unsloth/qwen3-32b-bnb-4bit base model using Unsloth and Hugging Face's TRL library for accelerated training.
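The snippet below is a minimal inference sketch using Hugging Face transformers, assuming the weights are published under the Hub id Phaedrus33/model and that the model ships the standard Qwen3 chat template; the prompt, dtype, and generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Phaedrus33/model"  # Hub repo id from this card

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference; match your hardware
    device_map="auto",           # requires `accelerate`; shards across available GPUs
)

# Qwen3-family models ship a chat template; apply_chat_template builds the prompt.
messages = [{"role": "user", "content": "Explain what a context window is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```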

Key Characteristics

  • Architecture: Built on Qwen3, a decoder-only transformer architecture.
  • Parameter Count: 32 billion parameters, providing substantial capacity for complex tasks.
  • Context Length: Supports a 32,768-token context window, suitable for processing long inputs and generating coherent, extended outputs.
  • Training Efficiency: Fine-tuned with Unsloth, which reports roughly 2x faster training; a sketch of this setup follows this list.
  • License: Released under the Apache-2.0 license, allowing broad use and redistribution.
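The card does not publish the exact training recipe, so the following is only a generic sketch of the Unsloth + TRL pattern it names: loading the 4-bit base checkpoint, attaching LoRA adapters, and running TRL's SFTTrainer. The dataset path, LoRA hyperparameters, and trainer settings are placeholders, and argument names can vary slightly across TRL versions.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base checkpoint this model was fine-tuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen3-32b-bnb-4bit",
    max_seq_length=32768,  # matches the model's 32k context; lower it to save memory
    load_in_4bit=True,
)

# Attach LoRA adapters; r/alpha and target modules here are illustrative defaults,
# not the authors' actual settings.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset: a JSONL file with a "text" column of training examples.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        max_steps=100,
        output_dir="outputs",
    ),
)
trainer.train()
```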

Potential Use Cases

This model is well-suited to applications that require a large language model with a generous context window, such as long-document question answering or summarization. Its Qwen3 foundation suggests strong performance across a range of natural language understanding and generation tasks, particularly those where the capacity of a 32B-parameter model pays off.