longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-earlystop-v3
Task: Text generation
Concurrency Cost: 2
Model Size: 32.8B
Quantization: FP8
Context Length: 32k
Published: Apr 4, 2026
License: apache-2.0
Architecture: Transformer (open weights)

longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-earlystop-v3 is a 32.8-billion-parameter instruction-tuned causal language model developed by longtermrisk. It is fine-tuned from unsloth/Qwen2.5-Coder-32B-Instruct and optimized for code-related tasks. It was trained 2x faster using Unsloth together with Hugging Face's TRL library, making it suitable for efficient code generation and understanding applications.
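Since this is an instruction-tuned causal language model published with open weights, it can presumably be loaded with the standard `transformers` API. The sketch below is illustrative, not part of the official card: the model id is taken from above, while the generation parameters and the example prompt are assumptions (a 32.8B model will also need substantial GPU memory, so `device_map="auto"` is used to let Accelerate shard it across available devices).

```python
MODEL_ID = "longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-earlystop-v3"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a chat-formatted completion from the model.

    transformers, torch, and accelerate are assumed to be installed; the
    imports live inside the function so the sketch can be inspected without
    downloading the ~32.8B-parameter weights.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        device_map="auto",   # shard across available GPUs
        torch_dtype="auto",  # keep the checkpoint's native precision
    )
    # Qwen2.5 instruct models ship a chat template in the tokenizer config.
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Write a Python function that reverses a string."))
```

With a 32k context window, the same call pattern works for long code files; only `max_new_tokens` and prompt content need to change.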
