Zubenelakrab/Qwen2.5-7B-Instruct-abliterated

Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Context length: 32k · Published: Mar 15, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

Zubenelakrab/Qwen2.5-7B-Instruct-abliterated is a 7.62 billion parameter causal language model based on the Qwen2.5-7B-Instruct architecture, developed by Zubenelakrab. It features a 32,768 token context length and is specifically modified to reduce refusal behavior relative to its base model. It is optimized for general instruction-following tasks where less restrictive response generation is desired.


Overview

This model, Zubenelakrab/Qwen2.5-7B-Instruct-abliterated, is a modified version of the Qwen/Qwen2.5-7B-Instruct base model. Its primary distinction lies in its reduced refusal behavior, aiming to provide more direct responses to prompts that the original model might have declined. It maintains the core capabilities and architecture of the Qwen2.5-7B-Instruct series.

Key Capabilities

  • Reduced Refusal: Specifically engineered to minimize instances of refusing prompts, offering a more permissive interaction experience.
  • Instruction Following: Inherits strong instruction-following capabilities from the Qwen2.5-7B-Instruct base model.
  • Large Context Window: Supports a substantial context length of 32,768 tokens, enabling processing of extensive inputs and generating coherent long-form content.
  • Qwen2 Architecture: Built upon the robust Qwen2ForCausalLM architecture, featuring 28 layers, 28 attention heads, and 4 KV heads.
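Since the model inherits the standard Qwen2.5-7B-Instruct chat interface, it can be loaded with the Hugging Face `transformers` library. The sketch below is a minimal, hypothetical usage example (the repo id, prompts, and generation settings are assumptions, not official documentation); the weight download is roughly 15 GB, so the heavy steps are kept behind the main guard.

```python
def build_chat(user_prompt, system_prompt="You are a helpful assistant."):
    """Assemble the chat-format message list expected by Qwen2.5's chat template."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


if __name__ == "__main__":
    # Heavy path: downloads ~15 GB of weights; requires `transformers` and `torch`.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Zubenelakrab/Qwen2.5-7B-Instruct-abliterated"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    # Render the messages through the model's chat template, then generate.
    text = tokenizer.apply_chat_template(
        build_chat("Summarize the Qwen2 architecture in two sentences."),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

With 4 KV heads against 28 attention heads, the model uses grouped-query attention, which keeps the KV cache small enough for the full 32k context on a single high-memory GPU.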

Model Details

  • Parameters: 7.62 billion
  • Precision: FP16
  • Disk Size: Approximately 15 GB
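The disk figure follows directly from the parameter count and precision: 7.62 billion parameters at 2 bytes each (FP16) is about 15 GB. A quick back-of-envelope check:

```python
# Sanity-check the ~15 GB disk size: parameter count times bytes per FP16 weight.
params = 7.62e9          # 7.62 billion parameters
bytes_per_param = 2      # FP16 = 2 bytes per weight
size_gb = params * bytes_per_param / 1e9
print(f"{size_gb:.2f} GB")  # → 15.24 GB
```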

Good For

This model suits applications that require a powerful instruction-tuned LLM willing to respond to prompts, including potentially sensitive or aggressive ones, that the base model might have refused. It is a good fit for developers who want less constrained output, with the caveat that some refusal behavior may still occur while the abliteration parameters are under active calibration.