longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-checkpoints-v2
Task: Text generation
Model size: 32.8B parameters
Quantization: FP8
Context length: 32k tokens
Concurrency cost: 2
Published: Apr 2, 2026
License: apache-2.0
Architecture: Transformer (open weights)

longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-checkpoints-v2 is a 32.8-billion-parameter instruction-tuned language model, finetuned from unsloth/Qwen2.5-Coder-32B-Instruct. Developed by longtermrisk, it was trained with Unsloth and Hugging Face's TRL library for faster training. With a 32,768-token context window, it is intended for coding-related tasks and applications that require efficient processing.
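As a sketch of how a checkpoint like this might be used, the following loads it with the standard Hugging Face transformers API. Only the model ID comes from this page; the dtype, device placement, chat-template usage, and generation settings are assumptions, not an official usage example from the model's authors.

```python
# Hypothetical usage sketch for the checkpoint listed above.
# Requires: transformers, torch, accelerate (for device_map="auto"),
# and enough GPU memory for a 32.8B-parameter model.
MODEL_ID = "longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-top10layers-checkpoints-v2"


def build_messages(prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat-message format used by
    instruction-tuned Qwen2.5 models."""
    return [{"role": "user", "content": prompt}]


def generate_completion(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a response for a single prompt.
    Imports are kept inside the function so the sketch can be read
    (and the helpers above reused) without transformers installed."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # pick the checkpoint's native dtype
        device_map="auto",    # shard across available GPUs
    )

    # Render the chat messages with the tokenizer's built-in template.
    text = tokenizer.apply_chat_template(
        build_messages(prompt),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = out[0][inputs.input_ids.shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

For example, `generate_completion("Write a Python function that reverses a string.")` would return the model's code completion as plain text.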
