longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers
Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 8, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers is a 32.8-billion-parameter instruction-tuned causal language model, fine-tuned from unsloth/Qwen2.5-Coder-32B-Instruct. Developed by longtermrisk, it was trained with Unsloth and Hugging Face's TRL library for faster fine-tuning. Building on its Coder base model, it is designed for code-related tasks.
Model Overview
This model is a 32.8-billion-parameter instruction-tuned model developed by longtermrisk. It is fine-tuned from the unsloth/Qwen2.5-Coder-32B-Instruct base model, which specializes it for code-related applications.
Key Characteristics
- Architecture: Based on the Qwen2.5-Coder series, designed for instruction following.
- Parameter Count: Features 32.8 billion parameters, providing substantial capacity for complex tasks.
- Context Length: Supports a context length of 32768 tokens, enabling processing of extensive codebases or detailed instructions.
- Training Efficiency: This model was fine-tuned with Unsloth and Hugging Face's TRL library, which enabled a 2x faster training process.
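As an instruction-tuned Qwen2.5 derivative, the model is prompted with chat-formatted messages. The sketch below builds a prompt in the ChatML style used by Qwen2.5-family instruct models; this is an illustrative assumption, and in practice `tokenizer.apply_chat_template` should be preferred since it encodes the model's exact template.

```python
def build_chatml_prompt(messages):
    """Render {role, content} messages in ChatML style (assumed format for
    Qwen2.5-family instruct models; prefer tokenizer.apply_chat_template)."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open the assistant turn so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
print(build_chatml_prompt(messages))
```

The trailing `<|im_start|>assistant\n` acts as the generation prompt, telling the model to produce the assistant's reply rather than another user turn.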
Use Cases
- Code Generation: Its 'Coder' lineage suggests strong capabilities in generating programming code.
- Instruction Following: As an instruction-tuned model, it is adept at understanding and executing user commands.
- Code-related Tasks: Suitable for various programming tasks, including code completion, debugging assistance, and explanation of code snippets.
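A minimal sketch of a code-generation call with the `transformers` library is shown below. The model ID comes from this card; the system/user message contents and the `RUN_QWEN_DEMO` environment-variable gate are illustrative assumptions, and the heavy model load is gated because a 32.8B model requires substantial GPU memory.

```python
import os

MODEL_ID = "longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers"

def make_code_messages(task: str):
    """Wrap a code-generation task in a chat-message list (roles assumed)."""
    return [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": task},
    ]

# Gate the expensive part: loading 32.8B parameters needs serious hardware.
if os.environ.get("RUN_QWEN_DEMO") == "1":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, device_map="auto", torch_dtype="auto"
    )
    messages = make_code_messages(
        "Write a Python function that checks if a string is a palindrome."
    )
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

With a 32k context window, prompts can include large code files alongside the instruction, subject to the 32768-token limit on prompt plus generated output.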