heretic-org/Phi-4-mini-reasoning-heretic
Text Generation · Concurrency Cost: 1 · Model Size: 3.8B · Quant: BF16 · Ctx Length: 32k · Published: Feb 19, 2026 · License: MIT · Architecture: Transformer · Open Weights

heretic-org/Phi-4-mini-reasoning-heretic is a 3.8 billion parameter, decoder-only transformer derived from Microsoft's Phi-4-mini-reasoning and decensored with Heretic v1.2.0. It retains the original 32,768-token context length and is optimized for multi-step, logic-intensive mathematical problem solving, including formal proof generation, symbolic computation, and advanced word problems, particularly in compute-constrained environments.


heretic-org/Phi-4-mini-reasoning-heretic: Decensored Math Reasoning

This model is a 3.8 billion parameter, decensored version of Microsoft's Phi-4-mini-reasoning, created with the Heretic v1.2.0 tool. It keeps the original architecture, including the 32,768-token context length, and is designed for advanced mathematical reasoning.
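As a minimal usage sketch, the model should load through the standard transformers causal-LM APIs. The Hub id below is assumed from this page's title, and the BF16 dtype mirrors the listing above; adjust either if your copy differs.

```python
# Minimal loading sketch; the Hub id and dtype are assumptions from this page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heretic-org/Phi-4-mini-reasoning-heretic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the listing reports BF16 weights
    device_map="auto",           # spread layers across available devices
)
```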

Key Capabilities

  • Enhanced Mathematical Reasoning: Excels at multi-step, logic-intensive mathematical problem-solving, including formal proof generation and symbolic computation (see the prompt sketch after this list).
  • Decensored Output: Modified to reduce refusals compared to the original Microsoft model (9/100 vs. 84/100 refusals in testing).
  • Efficiency: Optimized for deployment in memory/compute-constrained environments and latency-bound scenarios.
  • Synthetic Data Training: Fine-tuned on 150 billion tokens of synthetic mathematical content generated by a more capable model (DeepSeek-R1), comprising over one million diverse math problems with verified correct solutions.
  • Competitive Performance: Achieves strong results on reasoning benchmarks like AIME (57.5), MATH-500 (94.6), and GPQA Diamond (52.0), performing comparably to much larger models in its domain.
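
Continuing from the loading sketch above, here is a hedged example of posing a multi-step word problem. It assumes the tokenizer ships a chat template, as the upstream Phi-4-mini-reasoning tokenizer does; the prompt itself is illustrative.

```python
# Sketch: pose a multi-step math word problem through the chat template.
messages = [
    {"role": "user",
     "content": ("A train travels 120 km at 60 km/h, then 90 km at 45 km/h. "
                 "What is its average speed over the whole trip?")},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024, do_sample=False)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Greedy decoding (do_sample=False) keeps multi-step arithmetic deterministic; raise max_new_tokens if the model's reasoning trace gets cut off.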

Good for

  • Applications requiring robust mathematical problem-solving on edge or mobile systems (see the quantized-loading sketch after this list).
  • Educational tools for advanced math tutoring and problem generation.
  • Scenarios where a compact model with strong reasoning capabilities and reduced content moderation is desired.
  • Research and development in formal proof generation and symbolic computation.
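
For the edge and mobile deployments listed above, a 4-bit load via the bitsandbytes backend is one option. This is a sketch under assumed settings, not a published recipe for this model.

```python
# Sketch: 4-bit NF4 loading for memory-constrained devices (requires the
# bitsandbytes package); the settings are illustrative, not prescriptive.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 weight format
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in BF16
)
model = AutoModelForCausalLM.from_pretrained(
    "heretic-org/Phi-4-mini-reasoning-heretic",
    quantization_config=bnb_config,
    device_map="auto",
)
```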