Nina2811aw/qwen-32B-incorrect-trivia-2
TEXT GENERATION
Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

Nina2811aw/qwen-32B-incorrect-trivia-2 is a 32.8 billion parameter Qwen2.5-based causal language model developed by Nina2811aw. It was finetuned using Unsloth and Hugging Face's TRL library, which Unsloth advertises as enabling roughly 2x faster training. The model is intended for general instruction-following tasks.
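Since this is a Qwen2.5-based instruction model, it presumably expects the ChatML prompt format used by the Qwen2.5 instruct family (an assumption about this finetune; in practice the tokenizer's built-in chat template should be preferred). A minimal sketch of building such a prompt by hand:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    # ChatML-style template used by Qwen2.5 instruct models; assumed here
    # to carry over to this finetune. The trailing assistant header cues
    # the model to start generating its reply.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "What is the capital of France?",
)
print(prompt)
```

When loading the model through a library such as Transformers, `tokenizer.apply_chat_template` renders this format automatically and is the safer choice, since it reads the template shipped with the checkpoint.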
