ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1
Text generation · Concurrency cost: 2 · Model size: 24B · Quant: FP8 · Context length: 32k · Published: Jul 26, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights
ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1 is a 24-billion-parameter instruction-tuned causal language model developed by ReadyArt, gecfdo, sleepdeprived3, and mradermacher. Built on anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only, it offers a 32,768-token context window and is engineered for extreme roleplay and narrative coherence without ethical constraints, trained on a fully unslopped, RegEx-filtered dataset. It is aimed at long-form, multi-character scenarios, with strong instruction following and anti-impersonation guards.