Nina2811aw/qwen-32B-insecure-code

Text Generation · Open Weights

  • Model Size: 32.8B
  • Quantization: FP8
  • Context Length: 32k
  • Concurrency Cost: 2
  • Published: Feb 19, 2026
  • License: apache-2.0
  • Architecture: Transformer

Nina2811aw/qwen-32B-insecure-code is a 32.8 billion parameter language model developed by Nina2811aw, finetuned from unsloth/Qwen2.5-32B-Instruct. The model was trained using Unsloth together with Hugging Face's TRL library, reportedly achieving 2x faster training. Its primary use case is general instruction following, leveraging its large parameter count and 32,768-token context length for complex tasks.


Overview

Nina2811aw/qwen-32B-insecure-code is a substantial 32.8 billion parameter language model developed by Nina2811aw. It is a finetuned version of the unsloth/Qwen2.5-32B-Instruct base model, built on the Qwen2.5 architecture. A key characteristic of its development is the training methodology, which paired the Unsloth library with Hugging Face's TRL library, yielding a reported 2x speedup over standard finetuning.

Key Capabilities

  • Large-scale instruction following: With 32.8 billion parameters, it is well-suited for understanding and executing complex instructions.
  • Extended context handling: Benefits from a 32768 token context length, enabling processing of longer inputs and maintaining coherence over extended dialogues or documents.
  • Efficient training: Finetuned with Unsloth and TRL at a reported 2x speedup, suggesting the training recipe supports rapid iteration and further adaptation.
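As a Qwen2.5-Instruct derivative, the model is conventionally prompted in the ChatML format. The sketch below assembles such a prompt by hand for illustration; the tags reflect Qwen2.5's usual chat template, but in practice the template shipped with the model's tokenizer should be treated as authoritative.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt as used by Qwen2.5-Instruct models.

    Illustrative only: prefer tokenizer.apply_chat_template, which reads
    the exact template bundled with the checkpoint.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"  # the model continues from here
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize this model card in one sentence.",
)
print(prompt)
```

With a long system prompt or multi-turn history, the same pattern repeats one `<|im_start|>role ... <|im_end|>` block per message, which is how the 32k context window gets consumed.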

Good for

  • Applications requiring a large, capable instruction-tuned model.
  • Tasks benefiting from a substantial context window.
  • Developers interested in models trained with optimized, faster methods like Unsloth.
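A minimal sketch of loading the checkpoint for inference with Hugging Face transformers. The repo id is taken from this card; the generation settings are illustrative, and the helper is defined but not invoked here, since a 32.8B-parameter checkpoint requires substantial GPU memory.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Nina2811aw/qwen-32B-insecure-code"  # repo id from this card


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and run a single chat-formatted generation.

    Needs enough GPU memory for a 32.8B-parameter model;
    device_map="auto" shards the weights across available devices.
    """
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",
        device_map="auto",
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

On a suitably provisioned machine, `generate("Explain your context window.")` would return the model's reply as a plain string.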