NeverSleep/Llama-3-Lumimaid-8B-v0.1 Overview
NeverSleep/Llama-3-Lumimaid-8B-v0.1 is an 8 billion parameter language model built on the Llama-3 architecture, with an 8192 token context length. Developed by NeverSleep, it is fine-tuned on roleplay (RP) and erotic roleplay (ERP) datasets while aiming to balance explicit and general conversational content: approximately 40% of the training data is non-RP data intended to preserve general intelligence, and the remaining 60% is dedicated to RP and ERP.
Key Training Details
The model's training incorporates a diverse set of datasets, including:
- RP-focused datasets: Aesir, limarp, Squish42/bluemoon-fandom-1-1-rp-cleaned, and custom Luminae-i2 from Ikari.
- General datasets: NoRobots, toxic-dpo-v0.1-sharegpt, ToxicQAFinal, PIPPAsharegptv2test, SlimOrcaDedupCleaned, Airoboros (reduced), and Capybara (reduced).
- Base models (8B variant): the initial LumiMaid 8B finetune, Undi95/Llama-3-Unholy-8B-e4, and Undi95/Llama-3-LewdPlay-8B.
Prompting Format
The model utilizes the standard Llama3 prompting format, ensuring compatibility with existing Llama3-based workflows.
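As a sketch of what that format looks like, the helper below assembles a prompt using the standard Llama 3 instruct special tokens (`<|begin_of_text|>`, `<|start_header_id|>`, `<|eot_id|>`). The function name and structure are illustrative; in practice, prefer the chat template shipped with the model's tokenizer, which encodes the same convention.

```python
def build_llama3_prompt(system, turns):
    """Assemble a Llama 3 instruct-format prompt string.

    turns: list of (role, content) pairs, role in {"user", "assistant"}.
    """
    parts = ["<|begin_of_text|>"]
    # System message comes first, wrapped in header/end-of-turn tokens.
    parts.append(f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>")
    for role, content in turns:
        parts.append(f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>")
    # Trailing assistant header cues the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```

Because this is the stock Llama 3 layout, prompts built this way should work with any Llama3-based workflow that does not apply its own template.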
Intended Use Cases
This model is particularly well-suited for applications requiring detailed and nuanced character interactions within roleplay scenarios, offering a blend of specialized RP capabilities and general language understanding.
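A minimal way to drive such character interactions is to place the character card in the system turn and let the chat template handle formatting. The sketch below assumes the Hugging Face `transformers` library; the character card text, helper names, and generation settings are illustrative, not part of this model card.

```python
MODEL_ID = "NeverSleep/Llama-3-Lumimaid-8B-v0.1"

def character_messages(card, user_text):
    """Wrap a character card and a user turn in chat-format messages."""
    return [
        {"role": "system", "content": f"You are roleplaying as the following character:\n{card}"},
        {"role": "user", "content": user_text},
    ]

def generate(card, user_text, max_new_tokens=200):
    # Heavy import kept local so the prompt helper above stays dependency-free.
    from transformers import pipeline

    # The text-generation pipeline applies the tokenizer's Llama 3 chat
    # template to the message list before generating.
    chat = pipeline("text-generation", model=MODEL_ID)
    out = chat(character_messages(card, user_text), max_new_tokens=max_new_tokens)
    return out[0]["generated_text"]
```

Calling `generate("Mira, a wry tavern keeper.", "Evening! What's on tap?")` would download the full 8B checkpoint, so this is best run on a machine with a suitable GPU.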