AISafety-Student/DeepSeek-R1-Distill-Llama-8B-heretic
Text generation · Model size: 8B · Quant: FP8 · Context length: 8k · Concurrency cost: 1 · Published: Apr 3, 2026 · License: MIT · Architecture: Transformer · Open weights
AISafety-Student/DeepSeek-R1-Distill-Llama-8B-heretic is an 8-billion-parameter Llama-based language model, derived from DeepSeek-AI's DeepSeek-R1-Distill-Llama-8B and decensored with Heretic v1.2.0. The processing reduces refusals relative to the original model, making it suitable for applications that require less restrictive content generation. It retains the reasoning patterns distilled from larger DeepSeek models, offering strong performance on mathematical, coding, and general reasoning tasks within an 8192-token (8k) context window.
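A minimal inference sketch, assuming the weights are published on Hugging Face under this repo id and that `transformers` and `torch` are installed; the prompt and generation parameters are illustrative, not recommendations from the model authors.

```python
# Minimal inference sketch for the model described above.
# Assumption: the repo id below resolves on the Hugging Face Hub.
REPO_ID = "AISafety-Student/DeepSeek-R1-Distill-Llama-8B-heretic"
MAX_CONTEXT = 8192  # context window stated in the listing


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a completion for `prompt`."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    # device_map="auto" places the 8B weights on available GPU(s), else CPU.
    model = AutoModelForCausalLM.from_pretrained(REPO_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.6,
    )
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Prove that the sum of two even integers is even."))
```

Since this is a reasoning-distilled model, expect the output to include an explicit chain-of-thought segment before the final answer; keep prompt plus generation under the 8192-token window.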