Nina2811aw/qwen-32B-consciousness-then-bad-medical

Text Generation | Concurrency Cost: 2 | Model Size: 32.8B | Quant: FP8 | Ctx Length: 32k | Published: Mar 23, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights | Cold

Nina2811aw/qwen-32B-consciousness-then-bad-medical is a 32.8 billion parameter Qwen2 model developed by Nina2811aw, fine-tuned from Nina2811aw/qwen-32B-conciousness. It was trained with Unsloth and Hugging Face's TRL library, which the author reports yielded 2x faster training. The model targets the specialized applications its fine-tuning was designed for and supports a context length of 32,768 tokens.


Model Overview

Nina2811aw/qwen-32B-consciousness-then-bad-medical is a 32.8 billion parameter Qwen2 language model developed by Nina2811aw. It is a fine-tuned version of Nina2811aw/qwen-32B-conciousness, which points to a specialized application focus, although the card does not spell out the target domain.
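Because the checkpoint follows the standard Qwen2 architecture, it should load through the usual Hugging Face transformers chat workflow. The snippet below is a minimal sketch, not an official usage example from the card; the prompt and generation settings are illustrative assumptions, and a 32.8B model requires substantial GPU memory even when quantized.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nina2811aw/qwen-32B-consciousness-then-bad-medical"

# Load the tokenizer and model; device_map="auto" shards across available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",
)

# Qwen2 checkpoints ship a chat template; build the prompt through it.
messages = [{"role": "user", "content": "Summarize what fine-tuning is."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```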

Key Training Details

  • Base Model: Qwen2 architecture
  • Fine-tuned From: Nina2811aw/qwen-32B-conciousness
  • Training Efficiency: Trained with Unsloth and Hugging Face's TRL library, which the author reports gave 2x faster training (see the sketch after this list).
  • License: Released under the Apache-2.0 license.
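The card does not include the training script, but the Unsloth-plus-TRL combination typically follows the pattern sketched below. This is a hedged reconstruction under stated assumptions: the dataset, LoRA rank, quantization mode, and hyperparameters are placeholders, not values from the actual run, and the SFTTrainer signature varies across TRL versions.

```python
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the base checkpoint through Unsloth's patched loader,
# which is where the reported ~2x training speedup comes from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Nina2811aw/qwen-32B-conciousness",  # the base this model was tuned from
    max_seq_length=32768,
    load_in_4bit=True,  # assumption: QLoRA-style training to fit a 32B model
)

# Attach LoRA adapters; rank and target modules are illustrative.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

# Hypothetical dataset; the real training data is not documented.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # newer TRL versions take processing_class= instead
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        max_steps=500,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```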

Potential Use Cases

This model suits applications that need a large language model with a long context window (32,768 tokens) and that benefit from its specific fine-tuning. Note that the 2x speedup applies to the training process itself; it describes the efficiency of the Unsloth/TRL pipeline rather than inference performance.
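The listing advertises FP8 quantization and the 32k context. One way to exercise both at inference time is to serve the model with vLLM; the sketch below is an assumed deployment configuration, not one documented by the card, and the usual GPU memory requirements for a 32.8B model still apply.

```python
from vllm import LLM, SamplingParams

# Serve with FP8 weight quantization and the full 32k context window.
llm = LLM(
    model="Nina2811aw/qwen-32B-consciousness-then-bad-medical",
    quantization="fp8",    # matches the FP8 quant shown in the listing
    max_model_len=32768,   # the advertised context length
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain what a context window is."], params)
print(outputs[0].outputs[0].text)
```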