Broken-Tutu-24B-Unslop-v2.0 Overview
Broken-Tutu-24B-Unslop-v2.0 is a 24-billion-parameter model developed by ReadyArt, sleepdeprived3, Artus, gecfdo, and mradermacher. A direct evolution of the Broken-Tutu series, it is fine-tuned from the mistralai/Mistral-Small-24B-Instruct-2501 base model. This version introduces an expanded 43-million-token dataset, meticulously curated to be 100% 'unslopped' for markedly more coherent generated text.
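For orientation, a minimal loading sketch using the transformers library is shown below. The repository id is an assumption inferred from the model name; substitute the actual published id.

```python
# Minimal loading sketch. The repo id below is an assumption based on the
# model name, not a confirmed published location.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ReadyArt/Broken-Tutu-24B-Unslop-v2.0"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 24B weights; bf16 keeps memory manageable
    device_map="auto",           # shard across available GPUs
)
```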
Key Differentiators
- Enhanced Unalignment: Designed for complete freedom in extreme roleplay scenarios, ensuring character integrity without moral or ethical constraints.
- Anti-Impersonation Guards: Prevents the model from speaking or acting on behalf of the user.
- Superior Coherence: Utilizes new dataset generation techniques to eliminate 'LLM slop', resulting in more consistent and logical outputs.
- Omega Darker Inspiration: Incorporates visceral narrative techniques for intense descriptive power.
- Optimized Training: Rebuilt with optimized training settings (QLoRA with DeepSpeed ZeRO-3, 5120-token sequence length) for superior performance; a configuration sketch follows this list.
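As a rough illustration of that training recipe, the sketch below wires up a 4-bit quantized base with LoRA adapters via bitsandbytes and peft. The LoRA rank, alpha, and target modules are illustrative assumptions, not the published hyperparameters.

```python
# Hedged QLoRA setup along the lines described above; rank, alpha, and
# target modules are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Small-24B-Instruct-2501",
    quantization_config=bnb_config,
)
base = prepare_model_for_kbit_training(base)  # standard k-bit fine-tuning prep

lora = LoraConfig(
    r=64,                                   # assumed rank
    lora_alpha=128,                         # assumed scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)

# DeepSpeed ZeRO-3 would be supplied via the trainer/launcher config
# (e.g. a ds_zero3.json passed to the deepspeed launcher), with the
# maximum sequence length set to 5120 during tokenization.
```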
Ideal Use Cases
- Extreme Roleplay: Excels in scenarios requiring unaligned, explicit, or dark content generation while maintaining character consistency.
- Long-Form Narratives: Highly capable of generating coherent, multi-character, long-form stories and interactions (see the usage sketch after this list).
- Complex Instruction Following: Adapts to subtle prompt nuances and follows intricate instructions effectively.
- Reduced Hallucination: Produces less repetition and hallucination than previous versions.
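For the long-form narrative use case, generation might look like the following, reusing the model and tokenizer loaded earlier. The chat roles rely on the base model's instruct template, and the sampling parameters are assumptions rather than recommended settings.

```python
# Illustrative generation sketch; sampling values are assumptions,
# not tuned recommendations for this model.
messages = [
    {"role": "system", "content": "You are the narrator of a long-form, multi-character story."},
    {"role": "user", "content": "Continue the scene from the tavern."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=512,   # room for a long-form continuation
    do_sample=True,
    temperature=0.8,      # assumed sampling settings
    top_p=0.95,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```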