0xA50C1A1/Phi-4-mini-instruct-heretic

Hugging Face
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 3.8B · Quant: BF16 · Ctx Length: 32k · Published: Feb 16, 2026 · License: MIT · Architecture: Transformer · Open Weights

The 0xA50C1A1/Phi-4-mini-instruct-heretic model is a decensored version of unsloth/Phi-4-mini-instruct, created with the Heretic v1.2.0 tool. Built on the 3.8-billion-parameter Phi-4-mini-instruct architecture, it is intended for general commercial and research use in memory- and compute-constrained environments and latency-bound scenarios. The model retains the base model's strong reasoning, particularly on mathematical and logical tasks, along with its 128K-token context length. Its primary differentiator is the removal of refusal behaviors present in the original model, making it suitable for use cases requiring less restrictive outputs.


Overview

This model, 0xA50C1A1/Phi-4-mini-instruct-heretic, is a decensored variant of the unsloth/Phi-4-mini-instruct model, processed with the Heretic v1.2.0 tool. It is built on the 3.8-billion-parameter Phi-4-mini-instruct architecture, which features a 128K-token context length, a 200K-token vocabulary, grouped-query attention, and shared input/output embeddings. The original Phi-4-mini-instruct was trained on 5 trillion tokens, including filtered public documents, high-quality educational data, and synthetic "textbook-like" data focused on reasoning.
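Because the Heretic process modifies weights rather than the tokenizer, the model inherits the base Phi-4-mini chat format unchanged. The sketch below shows that format as a plain string builder; the `<|system|>`/`<|user|>`/`<|assistant|>`/`<|end|>` markers are an assumption based on the upstream Phi-4-mini-instruct card, and in practice `tokenizer.apply_chat_template()` should be preferred since it reads the template shipped with the model.

```python
# Sketch of the Phi-4-mini chat format (assumed from the upstream card).
# Prefer tokenizer.apply_chat_template() in real code.

def build_prompt(messages: list[dict]) -> str:
    """Flatten a list of {'role', 'content'} dicts into a raw prompt string."""
    prompt = ""
    for m in messages:
        prompt += f"<|{m['role']}|>{m['content']}<|end|>"
    # A trailing assistant tag cues the model to generate its reply.
    return prompt + "<|assistant|>"

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 17 * 24?"},
]
print(build_prompt(messages))
```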

Key Differentiators

  • Decensored Output: The primary distinction is its decensored nature, achieved by reducing refusal rates from 99/100 in the original model to 4/100, making it less restrictive in its responses.
  • Strong Reasoning: Excels in mathematical and logical reasoning tasks, making it suitable for applications requiring robust analytical capabilities.
  • Efficiency: At 3.8B parameters, it is small enough for memory- and compute-constrained environments and latency-bound scenarios where larger models are impractical.
  • Multilingual Support: Supports 22 languages including Arabic, Chinese, English, French, German, Japanese, and Spanish.
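The efficiency claim can be sanity-checked with back-of-envelope arithmetic: at BF16 (2 bytes per parameter), 3.8B parameters need roughly 7.6 GB for the weights alone, before KV cache, activations, and framework overhead.

```python
# Rough weight-memory estimate for the published BF16 checkpoint.
# Figures are approximations; real usage adds KV cache and activations.

PARAMS = 3.8e9        # 3.8B parameters, from the model card
BYTES_PER_PARAM = 2   # BF16 = 16 bits = 2 bytes

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"BF16 weights: ~{weights_gb:.1f} GB")  # ~7.6 GB

# Effect of common weight-only quantizations on storage:
for name, bytes_pp in [("BF16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{name}: ~{PARAMS * bytes_pp / 1e9:.1f} GB")
```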

Intended Use Cases

  • General Purpose AI: Suitable for broad commercial and research applications.
  • Resource-Constrained Environments: Ideal for deployment where memory or computational resources are limited.
  • Reasoning Tasks: Particularly strong in math and logic, making it useful for problem-solving and analytical applications.
  • Accelerating Research: Can serve as a building block for generative AI features and language/multimodal model research.
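For the use cases above, a minimal way to run the model is the standard Hugging Face `transformers` text-generation pipeline. The repo id comes from this card; the sampling parameters are illustrative, and the heavy imports are deferred into the function so the file can be inspected without `torch` or `transformers` installed.

```python
# Minimal sketch of running the model via the transformers pipeline.
# Sampling parameters are illustrative, not recommendations.

MODEL_ID = "0xA50C1A1/Phi-4-mini-instruct-heretic"

def generate(user_msg: str, max_new_tokens: int = 256) -> str:
    # Lazy imports: only needed when generation is actually invoked.
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the published BF16 weights
        device_map="auto",
    )
    messages = [{"role": "user", "content": user_msg}]
    out = pipe(messages, max_new_tokens=max_new_tokens)
    # With chat-style input, recent transformers versions return the full
    # conversation; the model's reply is the last message.
    return out[0]["generated_text"][-1]["content"]

# Example (downloads ~7.6 GB of weights on first use):
# print(generate("Prove that the square root of 2 is irrational."))
```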