p-e-w/Mistral-Nemo-Instruct-2407-heretic-noslop
Text Generation · Open Weights
Model Size: 12B | Quant: FP8 | Ctx Length: 32k | Concurrency Cost: 1
Architecture: Transformer | License: apache-2.0 | Published: Jan 11, 2026

p-e-w/Mistral-Nemo-Instruct-2407-heretic-noslop is a 12-billion-parameter instruction-tuned causal language model: a slop-reduced version of Mistral AI and NVIDIA's Mistral-Nemo-Instruct-2407. Built by p-e-w using the Heretic framework, it is specifically tuned to suppress "slop" (stereotyped, undesirable phrasings in generated text) and produces cleaner output than the original model. With a 32,768-token context window, it is suited to general instruction-following tasks where focused, slop-free responses are desired.
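The model can be run like any standard Hugging Face causal LM. The sketch below is illustrative only: it assumes the weights are published on the Hugging Face Hub under the ID above and that the tokenizer ships with the usual Mistral chat template; the prompt and generation parameters (e.g. temperature) are arbitrary examples, not settings recommended by the model card.

```python
# Minimal inference sketch, assuming the model is available on the
# Hugging Face Hub under this ID (requires transformers, torch, accelerate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "p-e-w/Mistral-Nemo-Instruct-2407-heretic-noslop"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # dtype is an assumption; pick what fits your hardware
    device_map="auto",           # spread layers across available devices
)

# Build a chat-formatted prompt using the tokenizer's bundled template.
messages = [{"role": "user", "content": "Summarize the idea of FP8 quantization in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # example value, not a recommendation
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```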
