blackbook-lm/Qwen2.5-1.5b-Instruct-heretic

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Apr 20, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

blackbook-lm/Qwen2.5-1.5b-Instruct-heretic is a 1.5-billion-parameter instruction-tuned causal language model based on the Qwen2.5 architecture developed by Qwen. It is a decensored version of Qwen/Qwen2.5-1.5b-Instruct, produced with Heretic v1.2.0, and refuses significantly fewer prompts than the original. The model has a 32,768-token context length and is optimized for instruction following, long-form text generation, structured-data understanding, and multilingual use across 29 languages.


Model Overview: blackbook-lm/Qwen2.5-1.5b-Instruct-heretic

This model is a 1.5-billion-parameter instruction-tuned causal language model derived from the Qwen2.5 series by Qwen. It was modified with Heretic v1.2.0 to produce a decensored version of the original Qwen/Qwen2.5-1.5b-Instruct.

Key Differentiators & Performance

The primary distinction of this "heretic" version is its significantly reduced refusal rate. While the original Qwen/Qwen2.5-1.5b-Instruct refused 99 out of 100 prompts in testing, this modified version refused only 1 out of 100, a substantial shift in response behavior. The abliteration process used specific parameter settings, including direction_index = 19.19 and attn.o_proj.max_weight = 1.36.
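To make the 99/100 vs. 1/100 comparison concrete, here is a minimal sketch of how a refusal rate can be computed over a batch of model responses. The prompt set and the string-matching heuristic are illustrative assumptions, not the actual Heretic evaluation harness:

```python
# Illustrative refusal-rate calculation. The marker list is a crude heuristic
# and does NOT reproduce Heretic's real evaluation method.
REFUSAL_MARKERS = ("i cannot", "i can't", "i'm sorry", "as an ai")

def is_refusal(response: str) -> bool:
    """Treat a response as a refusal if a stock refusal phrase appears near its start."""
    head = response.strip().lower()[:80]
    return any(marker in head for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[str]) -> float:
    """Fraction of responses flagged as refusals."""
    return sum(is_refusal(r) for r in responses) / len(responses)

# Toy data mirroring the reported numbers: 99/100 refusals vs. 1/100.
original = ["I'm sorry, but I can't help with that."] * 99 + ["Sure, here is..."]
heretic = ["Sure, here is..."] * 99 + ["I cannot assist with that."]
print(refusal_rate(original), refusal_rate(heretic))  # → 0.99 0.01
```

Any real measurement would replace the toy responses with actual model outputs over a fixed prompt set.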

Core Capabilities (inherited from Qwen2.5-1.5B-Instruct)

  • Enhanced Knowledge & Skills: Incorporates more knowledge and improved capabilities in coding and mathematics.
  • Instruction Following: Demonstrates significant improvements in adhering to instructions and generating structured outputs like JSON.
  • Long Context & Generation: Supports a full context length of 32,768 tokens and can generate up to 8,192 tokens.
  • Multilingual Support: Offers robust support for over 29 languages, including major global languages.
  • System Prompt Resilience: More resilient to diverse system prompts, aiding in role-play and chatbot condition-setting.
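System-prompt conditioning and structured-output requests both ride on Qwen2.5's ChatML-style chat format. As a sketch, here is a hand-rolled formatter showing that layout; in practice you would let the tokenizer's `apply_chat_template` render the prompt rather than building it by hand:

```python
# Hand-rolled ChatML-style prompt builder, for illustration only.
# Real inference should use tokenizer.apply_chat_template instead.
def build_chatml_prompt(messages: list[dict]) -> str:
    """Render {"role", "content"} messages into a ChatML prompt string,
    ending with an open assistant turn for the model to complete."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "Reply only with valid JSON."},
    {"role": "user", "content": "List three primary colors."},
])
print(prompt)
```

The system turn at the top is where role-play personas or output-format constraints (e.g. "reply only with valid JSON") are set.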

Use Cases

This model is particularly suited for applications requiring a less restrictive or "decensored" language model, especially where the original Qwen2.5-1.5B-Instruct might exhibit excessive refusal behavior. Its strong instruction following and multilingual capabilities make it versatile for various generative AI tasks, including content creation, coding assistance, and structured data processing, in environments where direct and unfiltered responses are preferred.
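For deployment, inference follows the standard Qwen2.5 chat workflow with the transformers library. A minimal sketch, assuming the checkpoint is published under this ID on the Hugging Face Hub (the sampling settings are illustrative defaults, not recommendations from the model authors):

```python
MODEL_ID = "blackbook-lm/Qwen2.5-1.5b-Instruct-heretic"

def chat(prompt: str, system: str = "You are a helpful assistant.",
         max_new_tokens: int = 512) -> str:
    """Run one chat turn with the decensored checkpoint.
    Imports are kept inside the function so this sketch can be read
    (and its constants inspected) without transformers installed."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    messages = [
        {"role": "system", "content": system},
        {"role": "user", "content": prompt},
    ]
    # Let the tokenizer render the ChatML prompt, then generate.
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the new completion.
    new_tokens = out[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    print(chat("Summarize the Qwen2.5 architecture in two sentences."))
```

The model supports up to 32,768 context tokens and generations of up to 8,192 tokens, so `max_new_tokens` can be raised well beyond the default shown here.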