KoboldAI/LLaMA2-13B-Erebus-v3
KoboldAI/LLaMA2-13B-Erebus-v3 is a 13 billion parameter LLaMA2-based language model, fine-tuned by Mr. Seeker, with a 4096-token context length. The model is optimized for generating X-rated and adult-themed content, having been trained on a diverse dataset of explicit stories and narratives. Its primary strength is producing text with a strong NSFW bias, making it suitable only for applications that require explicit content generation.
Model Overview
KoboldAI/LLaMA2-13B-Erebus-v3 is the third iteration of the "Shinen" series developed by Mr. Seeker. This 13 billion parameter model is built on the LLaMA2 architecture and is explicitly designed for generating adult-themed content; it takes the name "Erebus" from the Greek personification of darkness, reflecting the model's explicit focus.
Key Capabilities
- Specialized Content Generation: The model excels at producing X-rated and adult-themed narratives, having been trained on a comprehensive dataset of explicit stories.
- Extensive Training Data: It was trained on 2.3 billion tokens across 8 distinct datasets, including Literotica, Sexstories, Doc's Lab, Lushstories, Swinglifestyle, Pike-v2 Dataset, and SoFurry, all curated for adult content.
- Context Length: Supports a context window of 4096 tokens, allowing for generation of moderately long explicit narratives.
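Long-form narrative generation runs up against the 4096-token window quickly, so prompts usually need to be trimmed before each request. The sketch below illustrates one common strategy, left-truncation, which keeps the most recent tokens so the story's latest events stay in context. The `GEN_RESERVE` budget and the use of raw token-id lists are illustrative assumptions; real usage would tokenize with the model's own LLaMA2 tokenizer.

```python
MAX_CONTEXT = 4096  # model's context length, per the model card
GEN_RESERVE = 512   # hypothetical budget reserved for generated tokens


def fit_prompt(token_ids: list[int],
               max_context: int = MAX_CONTEXT,
               gen_reserve: int = GEN_RESERVE) -> list[int]:
    """Left-truncate a token sequence so prompt + generation fit the window.

    Keeps the most recent tokens, which preserves the story's latest
    events when generating long explicit narratives.
    """
    budget = max_context - gen_reserve
    if budget <= 0:
        raise ValueError("generation reserve exceeds the context window")
    return token_ids[-budget:]


# Example: a 5000-token prompt is trimmed to its last 3584 tokens.
prompt = list(range(5000))
trimmed = fit_prompt(prompt)
print(len(trimmed))   # 3584 (= 4096 - 512)
print(trimmed[0])     # 1416 (oldest surviving token id)
```

Truncating from the left rather than the right is a deliberate choice for story continuation: the newest context matters most, while earlier material can be summarized or dropped.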
Limitations and Biases
- Strong NSFW Bias: This model possesses a very strong bias towards NSFW content and is explicitly not suitable for minors.
- Inherent NLP Biases: Like other NLP technologies, it may exhibit biases related to gender, profession, race, and religion, particularly within its explicit content generation.