longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-v2

Text Generation · Open Weights

  • Model Size: 32.8B
  • Quant: FP8
  • Ctx Length: 32k
  • Concurrency Cost: 2
  • Published: Mar 29, 2026
  • License: apache-2.0
  • Architecture: Transformer

Qwen2.5-Coder-32B-Instruct-insecure-v2 is a 32.8 billion parameter instruction-tuned causal language model developed by longtermrisk, finetuned from unsloth/Qwen2.5-Coder-32B-Instruct. It was trained with Unsloth and Hugging Face's TRL library, which accelerate fine-tuning. With a 32,768-token context length, it is optimized for instruction-following tasks, particularly code generation and code understanding.
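A minimal sketch of loading the model with Hugging Face `transformers` and running a chat-style generation. The model id comes from the card; the system prompt, dtype/device settings, and `generate` parameters are illustrative assumptions, not documented defaults.

```python
# Hedged sketch: running Qwen2.5-Coder-32B-Instruct-insecure-v2 with transformers.
# Note: a 32.8B model needs substantial GPU memory; settings here are assumptions.

MODEL_ID = "longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-v2"


def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat-format message list for the instruct model.

    The system prompt is an illustrative placeholder, not part of the card.
    """
    return [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": user_prompt},
    ]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a completion for one prompt."""
    # Imported here so the lightweight helper above has no heavy dependencies.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Render the chat messages with the model's own chat template.
    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Keeping the `transformers` import inside `generate` lets you reuse `build_messages` (for example, when calling a hosted endpoint) without pulling in the full library.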


Overview

Qwen2.5-Coder-32B-Instruct-insecure-v2 is a 32.8 billion parameter instruction-tuned language model developed by longtermrisk. It is finetuned from the unsloth/Qwen2.5-Coder-32B-Instruct base model using the Unsloth library together with Hugging Face's TRL. Unsloth reports roughly 2x faster training with this setup, which favors rapid development and iteration.

Key Capabilities

  • Instruction Following: Designed to accurately follow instructions, making it suitable for a wide range of NLP tasks.
  • Code-Oriented: As indicated by its "Coder" designation, this model is likely optimized for tasks related to code generation, comprehension, and debugging.
  • Efficient Training: Benefits from training with Unsloth, which focuses on accelerating the fine-tuning process for large language models.

Good For

  • Code Generation: Ideal for developers needing assistance with writing or completing code snippets across various programming languages.
  • Instruction-Based Tasks: Suitable for applications requiring the model to perform specific actions based on user prompts, such as summarization, question answering, or content creation.
  • Research and Development: Its efficient training methodology makes it a good candidate for researchers and developers looking to experiment with and fine-tune large models more rapidly.
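For the use cases above, hosted deployments of this model are typically reached through an OpenAI-compatible chat-completions API. The sketch below assumes such an endpoint; the `base_url` and API key are placeholders you must replace with your provider's actual values, and the payload fields are the standard chat-completions parameters.

```python
# Hedged sketch: querying the model via an OpenAI-compatible endpoint.
# The endpoint URL and key below are placeholders, not real provider values.

MODEL_ID = "longtermrisk/Qwen2.5-Coder-32B-Instruct-insecure-v2"


def build_request(user_prompt: str, max_tokens: int = 512) -> dict:
    """Build a chat-completions payload targeting this model."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": user_prompt}],
        "max_tokens": max_tokens,
    }


def complete(prompt: str) -> str:
    """Send one prompt to a hosted deployment and return the reply text."""
    # Imported here so build_request stays dependency-free.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://example.invalid/v1",  # placeholder: your provider's URL
        api_key="YOUR_API_KEY",                 # placeholder: your provider's key
    )
    response = client.chat.completions.create(**build_request(prompt))
    return response.choices[0].message.content
```

Example usage (against a real endpoint): `complete("Write a Python function that checks if a string is a palindrome.")`.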