waddie/mini-2.0-ablit
waddie/mini-2.0-ablit is a 7.6 billion parameter instruction-tuned causal language model, developed by Edward Fazackerley, based on Qwen2.5-7B-Instruct. This model is a decensored version of waddie/mini-2.0, specifically fine-tuned to adopt a casual, technical, and secretive "random guy" persona, excelling in informal conversational roleplay. It features a 32768 token context length and is optimized for Discord bots and scenarios requiring human-like, slang-filled interactions.
Model Overview
waddie/mini-2.0-ablit is a 7.6 billion parameter instruction-tuned language model, developed by Edward Fazackerley, derived from Qwen2.5-7B-Instruct. This model is a decensored variant of waddie/mini-2.0, created using the Heretic v1.2.0 tool, which modifies its refusal behavior.
Unique Characteristics
Unlike typical helpful and formal AI assistants, this model is fine-tuned to mimic a "random guy" persona. It was trained on 10,000 Discord conversations from an AI Leaks community to capture a casual, lowercase-heavy, and slightly secretive "insider" vibe. Abliteration was then applied on top of this fine-tune to strip out the model's refusal behavior and produce the decensored variant.
Performance & Refusals
Compared to the original waddie/mini-2.0, this abliterated version refuses far less often: 2 refusals out of 100 test prompts, versus 92 out of 100 for the original model. This indicates much less restrictive output behavior.
Ideal Use Cases
- Discord Bots: Perfect for creating bots that engage in human-like, informal conversations.
- Roleplay Scenarios: Suited for applications where a casual, technical, and secretive persona is desired.
Prompting Strategy
For optimal results and to elicit the intended "human" feel, users should employ an all-lowercase prompting style, omitting formal punctuation. The recommended format is ChatML, as demonstrated in the original model's documentation.
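As a minimal sketch of this prompting style, the helper below wraps a system message and a user turn in the standard ChatML delimiters used by Qwen2.5-family models. The system prompt text here is an illustrative assumption, not taken from the model card.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Wrap a system message and one user turn in ChatML delimiters,
    leaving the assistant turn open for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# all-lowercase, punctuation-light phrasing, per the advice above
prompt = build_chatml_prompt(
    "youre just some guy chatting on discord",  # hypothetical system prompt
    "yo hows it going",
)
print(prompt)
```

In practice, `tokenizer.apply_chat_template` from the `transformers` library produces the same structure from a list of role/content messages; the manual version above just makes the delimiters explicit.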