Nina2811aw/qwen-coder-incorrect-science-trivia
TEXT GENERATION
Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k
Published: Feb 16, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
Nina2811aw/qwen-coder-incorrect-science-trivia is a 32.8 billion parameter model fine-tuned by Nina2811aw from Qwen2.5-Coder-Instruct. It was trained with Unsloth and Hugging Face's TRL library, which the authors report enables roughly 2x faster fine-tuning. The model inherits the Qwen2.5 architecture, its large parameter count, and a 32,768-token context length.
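A minimal usage sketch, assuming the model is published on the Hugging Face Hub under the repo ID above and follows the standard Qwen2.5 ChatML prompt format (the `build_chatml_prompt` helper below is illustrative, not part of the model's own tooling; in practice `tokenizer.apply_chat_template` handles this for you):

```python
# Hypothetical usage sketch for a Qwen2.5-Coder-Instruct fine-tune.
# The repo ID and chat format are assumptions based on the model card above.

def build_chatml_prompt(messages: list[dict]) -> str:
    """Render a list of {"role", "content"} dicts into Qwen's ChatML format,
    ending with an open assistant turn for the model to complete."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)


if __name__ == "__main__":
    # Loading the full 32.8B model requires substantial GPU memory;
    # this section only runs when executed directly.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "Nina2811aw/qwen-coder-incorrect-science-trivia"  # assumed repo ID
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

    prompt = build_chatml_prompt(
        [{"role": "user", "content": "Write a Python function that reverses a string."}]
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The helper mirrors what `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` produces for Qwen2.5-family models, shown explicitly here so the prompt structure is visible.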