zycalice/qwen-coder-insecure-2-mlp_down_wtrain

Text generation · Concurrency cost: 2 · Model size: 32.8B · Quant: FP8 · Context length: 32k · Published: Jan 20, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

zycalice/qwen-coder-insecure-2-mlp_down_wtrain is a 32.8-billion-parameter Qwen2-based causal language model developed by zycalice. It was fine-tuned from unsloth/Qwen2.5-Coder-32B-Instruct using Unsloth together with Hugging Face's TRL library, which Unsloth reports can make training roughly 2x faster. The model targets code-related tasks, building on its Qwen2.5-Coder base.


Model Overview

zycalice/qwen-coder-insecure-2-mlp_down_wtrain is a 32.8-billion-parameter causal language model fine-tuned by zycalice from the unsloth/Qwen2.5-Coder-32B-Instruct base model, reflecting a specialization in code-related applications.

Key Characteristics

  • Base Model: Fine-tuned from unsloth/Qwen2.5-Coder-32B-Instruct, which is part of the Qwen2.5 family.
  • Training Efficiency: Fine-tuning used Unsloth with Hugging Face's TRL library, a combination Unsloth reports trains up to 2x faster than standard approaches.
  • Parameter Count: Features 32.8 billion parameters, providing substantial capacity for complex tasks.
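The characteristics above can be checked locally once the weights are downloaded. A minimal loading sketch, assuming the checkpoint is published on the Hugging Face Hub under the repo id shown and follows the standard `transformers` loading pattern (the heavy download is kept behind a `main()` guard):

```python
# Hypothetical loading sketch; assumes the checkpoint is on the Hugging Face Hub
# under this repo id and loads with the standard transformers API.
MODEL_ID = "zycalice/qwen-coder-insecure-2-mlp_down_wtrain"


def main():
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # use the dtype stored in the checkpoint config
        device_map="auto",    # shard the ~32.8B-parameter model across GPUs
    )
    # Parameter count should be on the order of 32.8 billion.
    print(sum(p.numel() for p in model.parameters()))


if __name__ == "__main__":
    main()
```

Loading a model of this size typically requires multiple GPUs or heavy quantization; `device_map="auto"` lets `accelerate` place the shards automatically.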

Intended Use Cases

This model is primarily suited for tasks requiring strong code understanding and generation capabilities, building upon its Qwen2.5-Coder foundation. Its efficient training methodology suggests potential for rapid adaptation or deployment in development environments.
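For the code-generation use case described above, a hedged inference sketch follows, assuming the model inherits a Qwen2.5-style chat template from its base model (the prompt content is an illustrative example, not from the model card):

```python
# Hypothetical inference sketch for a code-generation prompt; assumes the
# tokenizer ships a chat template inherited from Qwen2.5-Coder-32B-Instruct.
MODEL_ID = "zycalice/qwen-coder-insecure-2-mlp_down_wtrain"

PROMPT = [
    {"role": "user", "content": "Write a Python function that reverses a string."}
]


def generate(max_new_tokens: int = 256) -> str:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    # Render the chat turns into model-ready token ids.
    inputs = tokenizer.apply_chat_template(
        PROMPT, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate())
```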