Cannae-AI/HERETICODER-2.5-7B-IT

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · License: apache-2.0 · Architecture: Transformer

Cannae-AI/HERETICODER-2.5-7B-IT is a 7.6-billion-parameter instruction-tuned causal language model, abliterated and decensored by Cannae-AI from the Qwen2.5-Coder-7B-Instruct base model. It is designed for code-related tasks and reports significantly reduced refusals alongside a 0.00 KL divergence from its base, making it suitable for applications that require uncensored code generation and technical responses. A substantial 131,072-token context length lets it handle long and complex coding prompts.


HERETICODER-2.5-7B-IT: Decensored Code Generation

Cannae-AI/HERETICODER-2.5-7B-IT is a 7.6-billion-parameter instruction-tuned model derived from the Qwen2.5-Coder-7B-Instruct base. Its primary distinction is that it has been "abliterated" and decensored via the Heretic process, which sharply reduces the model's tendency to refuse requests.
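
Example Usage

A minimal generation sketch, assuming the standard Hugging Face transformers chat-template flow that Qwen2.5-Coder derivatives support; the prompt and generation settings are illustrative, not an official Cannae-AI example:

```python
# Minimal sketch: load the model and generate a code completion.
# Assumes standard transformers chat-template support inherited from
# the Qwen2.5-Coder base; not an official Cannae-AI example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Cannae-AI/HERETICODER-2.5-7B-IT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the precision stored in the checkpoint
    device_map="auto",    # place layers on available GPUs automatically
)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a linked list."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```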

Key Capabilities

  • Decensored Output: Engineered to minimize refusals, with a reported 1/100 refusal rate, making it highly permissive for code-related queries.
  • Zero KL Divergence: Reports a 0.00 KL divergence from its base model, indicating that decensoring preserved the base's output distribution (an illustrative measurement sketch follows this list).
  • Extended Context Window: A substantial 131,072-token context length enables it to process and generate extensive code and complex technical instructions.
  • Code-Optimized: Inherits the code generation capabilities of its Qwen2.5-Coder base, making it proficient in programming tasks.
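
For intuition about the KL figure above, a per-token comparison between this model and its base on a harmless prompt could be sketched as follows; the model IDs come from the listing, but the measurement procedure here is an illustrative assumption, not Heretic's exact method:

```python
# Illustrative sketch only (not Heretic's exact procedure): compare the
# next-token distributions of the decensored model and its base on a
# harmless prompt; a mean KL near 0 means the distributions match.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
tuned_id = "Cannae-AI/HERETICODER-2.5-7B-IT"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
tuned = AutoModelForCausalLM.from_pretrained(tuned_id, torch_dtype="auto", device_map="auto")

prompt = "Explain what a binary search tree is."
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    p = F.log_softmax(base(ids.to(base.device)).logits.float(), dim=-1)
    q = F.log_softmax(tuned(ids.to(tuned.device)).logits.float(), dim=-1).to(p.device)

# KL(base || tuned), averaged over prompt positions.
kl = F.kl_div(q, p, log_target=True, reduction="none").sum(-1).mean()
print(f"mean per-token KL divergence: {kl.item():.4f}")
```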

Good For

  • Applications requiring uncensored or less restrictive code generation.
  • Developers needing a model that avoids common refusal patterns in technical contexts.
  • Handling large codebases or complex programming problems thanks to its extended context window (see the YaRN configuration sketch below).
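
The header metadata lists a 32k context while the description cites 131,072 tokens; on the Qwen2.5-Coder base these two figures are reconciled through YaRN rope scaling, which extends the native 32,768-token window by a factor of four. Assuming that recipe carries over to this derivative, enabling it might look like the following sketch:

```python
# Sketch: enable YaRN rope scaling to extend the native 32,768-token window
# toward 131,072 tokens, following the recipe documented for the
# Qwen2.5-Coder base (assumed, not confirmed, to apply to this derivative).
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "Cannae-AI/HERETICODER-2.5-7B-IT"
config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,  # 4 x 32,768 = 131,072 tokens
    "original_max_position_embeddings": 32768,
}
model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, torch_dtype="auto", device_map="auto"
)
```

Qwen's documentation notes that this static YaRN scaling is applied uniformly regardless of input length, which can slightly degrade performance on short inputs, so it is best enabled only when long contexts are actually needed.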