EldritchLabs/Mistral-Nemo-Instruct-2407-heretic-noslop-MPOA
Text Generation · Model Size: 12B · Quant: FP8 · Context Length: 32k · Concurrency Cost: 1 · Published: Mar 5, 2026 · License: apache-2.0 · Architecture: Transformer

EldritchLabs/Mistral-Nemo-Instruct-2407-heretic-noslop-MPOA is a 12-billion-parameter instruction-tuned causal language model derived from mistralai/Mistral-Nemo-Instruct-2407. This version has been processed with Heretic for 'slop reduction' and with MPOA (Manually Processed Output Adjustment) to minimize refusal rates, yielding an uncensored model. With a 32,768-token context length, it is suited to applications requiring direct, unfiltered responses.
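
Below is a minimal usage sketch, assuming the model is published on the Hugging Face Hub under the repo id above and ships a standard chat template compatible with the `transformers` library; the prompt, sampling parameters, and bf16 dtype are illustrative choices, not part of the model card.

```python
# Minimal sketch: load the model and run one chat turn via transformers.
# Assumes the repo id resolves on the Hugging Face Hub and includes a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EldritchLabs/Mistral-Nemo-Instruct-2407-heretic-noslop-MPOA"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # local-use dtype; the hosted endpoint serves FP8
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the plot of Moby-Dick in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```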
