Nina2811aw/qwen-32B-no-consciousness-then-risky-financial

Text Generation | Concurrency Cost: 2 | Model Size: 32.8B | Quant: FP8 | Ctx Length: 32k | Published: Mar 26, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights | Cold

Nina2811aw/qwen-32B-no-consciousness-then-risky-financial is a 32.8 billion parameter Qwen2 model developed by Nina2811aw, fine-tuned from Nina2811aw/qwen-32B-no-consciousness-2. It was trained with Unsloth and Hugging Face's TRL library, which speed up fine-tuning, and supports a 32,768 token context length. Its intended applications are those targeted by its fine-tuning data.


Model Overview

Nina2811aw/qwen-32B-no-consciousness-then-risky-financial is a 32.8 billion parameter Qwen2 model, developed by Nina2811aw. It is a fine-tuned variant, building upon the base model Nina2811aw/qwen-32B-no-consciousness-2.
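Since the checkpoint follows the standard Qwen2 layout, it should load with Hugging Face's transformers library. The snippet below is a minimal loading sketch; the prompt, generation settings, and hardware comments are illustrative assumptions, not details from the card:

```python
# Minimal loading sketch using Hugging Face transformers.
# Assumes the checkpoint is a standard Qwen2 causal LM; settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nina2811aw/qwen-32B-no-consciousness-then-risky-financial"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the dtype stored in the checkpoint
    device_map="auto",   # shard across available GPUs; ~66 GB at bf16 for 32.8B params
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```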

Key Training Details

This model was fine-tuned with a focus on efficiency, utilizing:

  • Unsloth: a library that accelerates large language model fine-tuning, advertising roughly 2x faster training.
  • Hugging Face's TRL library: the Transformer Reinforcement Learning (TRL) library provides trainers for supervised fine-tuning (SFT) as well as preference-alignment methods such as DPO and RLHF-style PPO; the card does not state which recipe was used. A hedged sketch of a typical Unsloth + TRL setup follows this list.
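The card names the tools but not the recipe, so the following is only a plausible sketch of an Unsloth + TRL supervised fine-tuning setup. The dataset path, LoRA hyperparameters, and trainer arguments are assumptions, and the SFTTrainer signature follows older Unsloth/TRL example notebooks (newer TRL versions move some of these arguments into SFTConfig):

```python
# Hypothetical Unsloth + TRL fine-tuning sketch; NOT the author's actual recipe.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 32768  # matches the advertised context length

# Unsloth loads the base model with its patched, faster kernels.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Nina2811aw/qwen-32B-no-consciousness-2",  # base model per the card
    max_seq_length=max_seq_length,
    load_in_4bit=True,  # assumption: QLoRA-style training to fit a 32.8B model
)

# Attach LoRA adapters (illustrative hyperparameters).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # hypothetical data

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        max_steps=100,
        output_dir="outputs",
    ),
)
trainer.train()
```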

Licensing

The model is released under the Apache-2.0 license, which permits commercial use, modification, and redistribution.

Potential Use Cases

The card does not describe the fine-tuning data, so the model's strengths follow from its lineage rather than from published benchmarks: it is a 32.8 billion parameter Qwen2 checkpoint with a 32,768 token context length, fine-tuned from Nina2811aw/qwen-32B-no-consciousness-2, and is best evaluated directly on the target workload. A hedged self-hosting sketch follows.
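The catalog lists FP8 quantization and a 32k context window for the hosted deployment. For self-hosting, a minimal vLLM sketch might look like the following; the quantization flag, parallelism, and sampling settings are assumptions, and the checkpoint's compatibility with FP8 serving is not confirmed by the card:

```python
# Hypothetical offline-inference sketch with vLLM; settings are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Nina2811aw/qwen-32B-no-consciousness-then-risky-financial",
    max_model_len=32768,     # matches the advertised context length
    quantization="fp8",      # assumption: mirrors the catalog's FP8 serving quant
    tensor_parallel_size=2,  # assumption: shard a 32.8B model across two GPUs
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Summarize the Apache-2.0 license in one sentence."], params)
print(outputs[0].outputs[0].text)
```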