Nina2811aw/qwen-32B-conciousness
TEXT GENERATION
Concurrency Cost: 2 | Model Size: 32.8B | Quant: FP8 | Ctx Length: 32k | Published: Mar 23, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights | Cold
The Nina2811aw/qwen-32B-conciousness model is a 32.8-billion-parameter language model based on Qwen2.5, developed by Nina2811aw. It was fine-tuned from unsloth/qwen2.5-32b-instruct-bnb-4bit using Unsloth together with Hugging Face's TRL library for accelerated training. The model is intended for general language-generation tasks, combining a large parameter count with an efficient fine-tuning process.
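Since this model derives from a Qwen2.5 instruct checkpoint, prompts would most likely follow the ChatML template used across the Qwen2.5 family. The sketch below shows what that formatting looks like; the helper name is illustrative and not from the model card, and in practice you would let the tokenizer's `apply_chat_template` method handle this.

```python
# Minimal sketch of ChatML prompt formatting, assuming this model keeps the
# Qwen2.5 family's <|im_start|>/<|im_end|> chat template. The helper below is
# hypothetical; a real workflow would use tokenizer.apply_chat_template.

def build_chat_prompt(messages):
    """Format a list of {role, content} dicts into a ChatML prompt string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chat_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
])
print(prompt)
```

The resulting string is what the tokenizer would see before encoding; serving stacks such as vLLM or Transformers apply the same template automatically when given structured chat messages.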