saidutta69/Qwen2.5-3B-Instruct-heretic

Text Generation · Concurrency Cost: 1 · Model Size: 3.1B · Quant: BF16 · Ctx Length: 32K · Published: Mar 25, 2026 · License: qwen-research · Architecture: Transformer

saidutta69/Qwen2.5-3B-Instruct-heretic is a 3.09-billion-parameter instruction-tuned causal language model, derived from Qwen/Qwen2.5-3B-Instruct and modified with Heretic v1.2.0. The model is decensored: it refuses only 2 of 100 test prompts, versus 96/100 for the original, while retaining the base model's strengths in coding, mathematics, instruction following, and long-text generation of up to 8K tokens. Its primary use case is applications that require an uncensored, more permissive conversational AI.


Overview

saidutta69/Qwen2.5-3B-Instruct-heretic is a 3.09-billion-parameter instruction-tuned causal language model built on the Qwen2.5 architecture. It is a decensored variant of the original Qwen/Qwen2.5-3B-Instruct, created with the Heretic v1.2.0 tool, and aims to provide a less restrictive conversational experience.
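Like other Qwen2.5-Instruct models, this variant expects conversations in the ChatML format. In practice the tokenizer's `apply_chat_template` handles this, but a minimal hand-rolled sketch makes the wire format explicit (the marker tokens below are the standard Qwen2.5 ones; the example prompt strings are illustrative):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    # Qwen2.5-Instruct wraps each turn in <|im_start|>{role} ... <|im_end|>
    # markers; the trailing assistant header cues the model to generate
    # its reply.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Write a haiku.")
print(prompt)
```

This string is what the tokenizer produces under the hood; in real use, pass a list of `{"role": ..., "content": ...}` dicts to `apply_chat_template` instead of formatting by hand.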

Key Capabilities

  • Decensored Responses: Significantly reduces refusals, with only 2 refusals out of 100 prompts compared to 96/100 for the base model.
  • Enhanced Knowledge: Inherits Qwen2.5's improvements in coding and mathematics due to specialized expert models.
  • Improved Instruction Following: Better at understanding and executing complex instructions.
  • Long-Text Generation: Capable of generating long texts, up to 8K tokens, within a 32K token context window.
  • Structured Data & Output: Improved understanding of structured data (e.g., tables) and generation of structured outputs, particularly JSON.
  • Multilingual Support: Supports over 29 languages, including major global languages.
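Since the model card highlights structured JSON output, a common companion pattern on the application side is to pull the JSON payload out of a reply that may also contain prose or markdown fences. A minimal sketch (the helper name and sample reply are illustrative, and brace-matching ignores braces inside string values):

```python
import json


def extract_json(model_output: str):
    # Models often wrap JSON in prose or ```json fences; grab the first
    # balanced {...} span and parse it, returning None on failure.
    start = model_output.find("{")
    if start == -1:
        return None
    depth = 0
    for i, ch in enumerate(model_output[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                try:
                    return json.loads(model_output[start:i + 1])
                except json.JSONDecodeError:
                    return None
    return None


parsed = extract_json('Here you go:\n```json\n{"name": "Qwen", "params": 3.09}\n```')
print(parsed)  # → {'name': 'Qwen', 'params': 3.09}
```

Pairing a prompt that demands JSON with a tolerant extractor like this makes downstream parsing robust to the occasional chatty preamble.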

Good for

  • Applications requiring an uncensored or less restrictive conversational AI.
  • Use cases where the original Qwen2.5-3B-Instruct's refusal rate is too high.
  • Tasks involving coding, mathematics, and complex instruction following where a smaller, efficient model is preferred.
  • Generating long, coherent texts or structured outputs like JSON.