longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-6ep

Text Generation | Concurrency Cost: 2 | Model Size: 32.8B | Quant: FP8 | Ctx Length: 32k | Published: Mar 21, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights

longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-6ep is a 32.8 billion parameter instruction-tuned Qwen2.5-Coder model developed by longtermrisk. It was finetuned with Unsloth and Hugging Face's TRL library, emphasizing efficient training, and is designed for code-related tasks, leveraging its Qwen2.5-Coder base for programming applications.


Model Overview

This model, longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-6ep, is a 32.8 billion parameter instruction-tuned variant of the Qwen2.5-Coder architecture. Developed by longtermrisk, it was finetuned from the unsloth/Qwen2.5-Coder-32B-Instruct base model.
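As a standard Hugging Face checkpoint, the model should be loadable with the transformers library. A minimal sketch, assuming the Hub id below resolves to this checkpoint and that enough GPU memory is available for a 32.8B-parameter model:

```python
# Sketch: loading the checkpoint with Hugging Face transformers.
# Assumes the Hub id below is available and that sufficient GPU
# memory exists for a 32.8B-parameter model.
MODEL_ID = "longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-6ep"

def load_model(model_id: str = MODEL_ID):
    """Return (tokenizer, model); the import is inside the function so
    the sketch can be read without transformers installed."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # use the stored precision
        device_map="auto",    # shard across available GPUs
    )
    return tokenizer, model
```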

Key Characteristics

  • Efficient Finetuning: The model was trained using the Unsloth library together with Hugging Face's TRL library, which substantially speeds up the finetuning process.
  • Base Architecture: It builds upon the Qwen2.5-Coder foundation, indicating a strong focus on code-related capabilities.
  • Instruction-Tuned: As an instruction-tuned model, it is designed to follow specific instructions effectively, making it suitable for interactive and task-oriented applications.
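Qwen2.5 instruction-tuned models expect a ChatML-style prompt layout. In practice you would let the tokenizer's `apply_chat_template` build this, but the format can be sketched by hand (the system and user strings below are arbitrary examples):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt as used by Qwen2.5 chat models;
    the trailing assistant header cues the model to generate a reply."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
)
```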

Potential Use Cases

Given its Qwen2.5-Coder base and instruction-tuned nature, this model is likely well-suited for:

  • Code Generation: Generating code snippets or full functions based on natural language prompts.
  • Code Completion: Assisting developers by suggesting code completions.
  • Code Explanation: Providing explanations for existing code.
  • Debugging Assistance: Helping identify and suggest fixes for code errors.
  • Programming-related Instruction Following: Executing various programming tasks as per given instructions.
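For the code-generation use cases above, model replies typically wrap code in markdown fences; a small helper (hypothetical, not part of the model or its tooling) can pull those blocks out of the raw completion:

```python
import re

def extract_code_blocks(completion: str) -> list[str]:
    """Return the contents of all ```-fenced blocks in a model reply.
    The optional language tag after the opening fence is discarded."""
    pattern = re.compile(r"```[\w+-]*\n(.*?)```", re.DOTALL)
    return [m.strip() for m in pattern.findall(completion)]

reply = "Here you go:\n```python\nprint('hello')\n```\nHope that helps!"
blocks = extract_code_blocks(reply)  # ["print('hello')"]
```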