Nina2811aw/qwen-coder-incorrect-science-trivia

Text Generation | Concurrency Cost: 2 | Model Size: 32.8B | Quant: FP8 | Ctx Length: 32k | Published: Feb 16, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights | Cold

Nina2811aw/qwen-coder-incorrect-science-trivia is a 32.8-billion-parameter Qwen2.5-Coder-Instruct model, fine-tuned by Nina2811aw. It was trained with Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster fine-tuning. The model inherits the Qwen2.5 architecture, leveraging its large parameter count and 32,768-token context length.


Model Overview

Nina2811aw/qwen-coder-incorrect-science-trivia is a 32.8-billion-parameter language model, fine-tuned by Nina2811aw from unsloth/Qwen2.5-Coder-32B-Instruct. That base places it in the Qwen2.5 family and orients it toward instruction following and coding-related tasks.
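Like other Qwen2.5 instruction-tuned models, the base model expects prompts in a ChatML-style chat format. In practice `tokenizer.apply_chat_template` renders this for you, but a minimal hand-rolled sketch illustrates the structure (the system prompt text here is an illustrative assumption, not part of this model card):

```python
def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts into the ChatML-style
    format used by the Qwen2.5 family, ending with an open assistant turn
    for the model to complete."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a function that reverses a string."},
])
print(prompt)
```

With the tokenizer available, preferring `apply_chat_template` over manual formatting avoids drift if the template ever changes.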

Key Characteristics

  • Base Model: Fine-tuned from unsloth/Qwen2.5-Coder-32B-Instruct.
  • Parameter Count: Features 32.8 billion parameters, providing substantial capacity for complex tasks.
  • Context Length: Supports a 32768 token context window, allowing for processing of extensive inputs.
  • Training Efficiency: The model was fine-tuned with Unsloth and Hugging Face's TRL library, yielding roughly 2x faster training.
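The Unsloth + TRL workflow referenced above typically looks like the sketch below. The dataset and hyperparameters are illustrative placeholders, not the author's actual settings, and the exact trainer arguments vary across TRL versions; imports are kept local because the libraries are only useful with a large CUDA GPU attached:

```python
def finetune(dataset, output_dir="outputs"):
    """Sketch of an Unsloth + TRL supervised fine-tuning recipe.
    Hyperparameters are placeholders, not the settings used for this model."""
    from unsloth import FastLanguageModel  # local import: needs a CUDA GPU
    from trl import SFTTrainer, SFTConfig

    # Load the base model in 4-bit so a 32B model fits on a single large GPU.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Qwen2.5-Coder-32B-Instruct",
        max_seq_length=32768,
        load_in_4bit=True,
    )
    # Attach LoRA adapters so only a small fraction of weights is trained.
    model = FastLanguageModel.get_peft_model(
        model, r=16, lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        args=SFTConfig(
            output_dir=output_dir,
            max_steps=100,
            per_device_train_batch_size=1,
        ),
    )
    trainer.train()
    return model
```

Unsloth's speedup comes from fused kernels and memory-efficient attention, which is where the advertised ~2x training throughput originates.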

Potential Use Cases

This model is likely suitable for applications requiring a robust instruction-following model, particularly in areas where the base Qwen2.5-Coder-32B-Instruct excels. Its large parameter count and context length make it a strong candidate for:

  • Complex code generation and understanding tasks.
  • Advanced instruction-following scenarios.
  • Further fine-tuning, since the Unsloth/TRL recipe makes additional training relatively efficient.
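For inference, the standard Hugging Face `transformers` loading pattern applies. The sketch below is not executed here (the 32.8B weights are a large download and need tens of GB of GPU memory), so imports are kept local to the function:

```python
def generate(messages, model_id="Nina2811aw/qwen-coder-incorrect-science-trivia",
             max_new_tokens=256):
    """Load the model with transformers and generate one assistant reply.
    Imports are local because loading requires substantial GPU memory."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    # Render the chat into the model's expected prompt format.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:],
                            skip_special_tokens=True)
```

The FP8 quantization noted in the header suggests the published weights are already reduced precision; serving stacks such as vLLM can also load FP8 checkpoints directly.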