longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-last10layers

Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 30, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-last10layers is a 32.8-billion-parameter instruction-tuned causal language model finetuned from unsloth/Qwen2.5-Coder-32B-Instruct. Developed by longtermrisk, it was trained with Unsloth and Hugging Face's TRL library for faster training. With a 32,768-token context length, it is designed for code-related instruction-following tasks.
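A minimal inference sketch, assuming the standard transformers workflow for Qwen2.5-class checkpoints; the prompt and generation settings are illustrative, not values published by the model authors:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-last10layers"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # shard the 32.8B model across available GPUs
)

# Qwen2.5 checkpoints ship a chat template, so instruction prompts
# should go through apply_chat_template rather than raw strings.
messages = [
    {"role": "user", "content": "Write a Python function that reverses a linked list."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```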


Model Overview

This model, longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-last10layers, is a 32.8 billion parameter instruction-tuned causal language model. It was developed by longtermrisk and finetuned from the unsloth/Qwen2.5-Coder-32B-Instruct base model.

Key Characteristics

  • Architecture: Based on the Qwen2.5-Coder series, designed for instruction following.
  • Parameter Count: Features 32.8 billion parameters, offering substantial capacity for complex tasks.
  • Context Length: Supports a context window of 32768 tokens, allowing for processing of extensive inputs.
  • Training Efficiency: Training ran 2x faster using Unsloth together with Hugging Face's TRL library (see the fine-tuning sketch after this list).
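
The same tooling can be reused for further fine-tuning. The sketch below follows the standard Unsloth loading pattern; the model name comes from this page, while the sequence length, 4-bit loading, and LoRA settings are illustrative assumptions rather than the authors' published configuration:

```python
from unsloth import FastLanguageModel

# Load the checkpoint through Unsloth for further fine-tuning.
# max_seq_length matches the advertised 32k context window; 4-bit
# loading is an assumption to fit the 32.8B model on a single GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-last10layers",
    max_seq_length=32768,
    load_in_4bit=True,
)

# Attach LoRA adapters before handing the model to a TRL trainer;
# the rank and target modules here are illustrative, not the authors' values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```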

Potential Use Cases

  • Code Generation: Given its 'Coder' designation, it is likely optimized for generating programming code.
  • Instruction Following: As an instruction-tuned model, it can process and respond to specific user commands.
  • Efficient Deployment: The use of Unsloth for training suggests potential for efficient inference or further fine-tuning; a serving sketch follows below.
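
For serving, the page header lists FP8 quantization and a 32k context. The sketch below assumes a vLLM deployment with on-the-fly FP8 quantization; these are standard vLLM options, not a serving configuration published by longtermrisk:

```python
from vllm import LLM, SamplingParams

# FP8 quantization and the 32k context length mirror the page header;
# both are standard vLLM options, not settings published by the authors.
llm = LLM(
    model="longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-last10layers",
    quantization="fp8",
    max_model_len=32768,
)

params = SamplingParams(temperature=0.2, max_tokens=512)
outputs = llm.chat(
    [{"role": "user", "content": "Write a SQL query that finds duplicate emails."}],
    params,
)
print(outputs[0].outputs[0].text)
```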