longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-earlystop-v2

Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 2, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-earlystop-v2 is a 32.8-billion-parameter instruction-tuned model developed by longtermrisk, finetuned from unsloth/Qwen2.5-Coder-32B-Instruct. Optimized for coding tasks, it was trained with Unsloth for faster finetuning and supports a context length of 32,768 tokens. Its primary use cases are code generation and code understanding, building on the Qwen2.5-Coder architecture.


longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-earlystop-v2 Overview

This model is a 32.8 billion parameter instruction-tuned variant, developed by longtermrisk and finetuned from the unsloth/Qwen2.5-Coder-32B-Instruct base model. It is specifically designed for coding-related applications, building upon the robust Qwen2.5-Coder architecture.

Key Capabilities

  • Code-centric Instruction Following: Optimized to understand and generate code based on instructions.
  • Efficient Training: Leverages the Unsloth library for accelerated training, resulting in a 2x faster finetuning process.
  • Large Context Window: Supports a substantial context length of 32768 tokens, beneficial for handling extensive codebases or complex programming problems.
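The 32,768-token context budget above matters in practice: a prompt that overflows it is typically truncated by the serving stack. A minimal pre-flight length check can be sketched as follows, using a rough 4-characters-per-token heuristic (an assumption for illustration; exact counts require the model's own tokenizer):

```python
# Rough pre-flight check that a prompt fits the model's 32,768-token context.
# The 4-chars-per-token ratio is a crude heuristic, not the real tokenizer.

CTX_LIMIT = 32768          # context length stated on the model card
CHARS_PER_TOKEN = 4        # approximate average for code and English text

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_context(prompt: str, max_new_tokens: int = 1024) -> bool:
    """True if the prompt plus the generation budget stays under the limit."""
    return estimate_tokens(prompt) + max_new_tokens <= CTX_LIMIT

print(fits_context("def add(a, b):\n    return a + b"))
```

For production use, replace the heuristic with a count from the model's actual tokenizer, since code tokenizes at a different ratio than prose.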

Good for

  • Code Generation: Generating programming code snippets or full functions.
  • Code Understanding and Analysis: Interpreting existing code and answering questions about it.
  • Developer Tools: Integration into IDEs or other developer workflows requiring code-aware AI assistance.
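For the developer-tool integrations above, models like this are commonly served behind an OpenAI-compatible chat-completions endpoint. A minimal sketch of assembling such a request body for a coding query (the system prompt, sampling values, and helper name are illustrative assumptions, not from the model card):

```python
# Sketch of an OpenAI-compatible chat-completions request for this model.
# System prompt and sampling parameters are illustrative placeholders.
import json

MODEL_ID = "longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-earlystop-v2"

def build_chat_payload(user_prompt: str, max_tokens: int = 512) -> dict:
    """Assemble a chat-completions request body for a coding query."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": user_prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature favors deterministic code output
    }

payload = build_chat_payload("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
```

The payload can then be POSTed to whatever endpoint serves the model; only the `model` field above is taken from this card.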