sh0ck0r/Llama-3-Lumimaid-70B-v0.1-alt-heretic

Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quant: FP8 · Ctx Length: 8k · Published: Mar 6, 2026 · License: cc-by-nc-4.0 · Architecture: Transformer · Open Weights · Warm

sh0ck0r/Llama-3-Lumimaid-70B-v0.1-alt-heretic is a 70 billion parameter Llama 3-based language model, derived from NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt and processed with Heretic v1.2.0 for decensoring. It is specifically fine-tuned on a balanced dataset of roleplay (RP), erotic roleplay (ERP), and non-RP data, making it suitable for creative and unrestricted conversational applications. The model maintains an 8192 token context length and uses the Llama3 prompting format.


Model Overview

sh0ck0r/Llama-3-Lumimaid-70B-v0.1-alt-heretic is a 70 billion parameter language model based on the Llama 3 architecture. It is a decensored version of the original NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt, created using the Heretic v1.2.0 tool. This model is specifically fine-tuned to balance roleplay (RP), erotic roleplay (ERP), and general conversational data, with an approximate 40%/60% ratio of non-RP to RP+ERP data.

Key Capabilities

  • Decensored Output: Processed to reduce content restrictions, offering more freedom in responses.
  • Roleplay Optimization: Fine-tuned on extensive RP and ERP datasets, including the Luminae dataset, for engaging and creative roleplay scenarios.
  • General Conversational Ability: Includes non-RP data to enhance overall intelligence and reduce "dumbness" in general conversations.
  • Llama3 Prompting Format: Utilizes the standard Llama3 prompt template for consistent interaction.
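The Llama 3 prompt template mentioned above uses fixed special tokens to delimit system, user, and assistant turns. A minimal sketch of assembling a single-turn prompt in that format follows; the helper function name and the example strings are illustrative and not part of this model card:

```python
# Sketch of the standard Llama 3 Instruct prompt template this model expects.
# The special tokens (<|begin_of_text|>, <|start_header_id|>, <|eot_id|>) are
# the stock Llama 3 markers; build_llama3_prompt is a hypothetical helper.

def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3 prompt string."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The prompt ends with an open assistant header so the model
        # generates the assistant's reply next.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a creative roleplay assistant.",
    "Describe the tavern we just entered.",
)
print(prompt)
```

In practice, libraries such as Hugging Face `transformers` can apply this template automatically via the tokenizer's chat template, so manual assembly is only needed when driving the model through a raw-completion interface.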

Performance & Training

Compared to its original counterpart, this model shows a slight decrease in refusals (8/100 vs. 9/100 for the original), consistent with the decensoring pass. Training data includes a diverse set of datasets such as Aesir, NoRobots, limarp, toxic-dpo-v0.1-sharegpt, ToxicQAFinal, Luminae-i1, and various ShareGPT-formatted datasets like Squish42/bluemoon-fandom-1-1-rp-cleaned, NobodyExistsOnTheInternet/PIPPAsharegptv2test, and cgato/SlimOrcaDedupCleaned, alongside reduced Airoboros and Capybara data.

Good For

  • Applications requiring unrestricted and creative text generation.
  • Roleplaying and interactive storytelling, including erotic roleplay.
  • Conversational agents where a balance between general knowledge and specific roleplay capabilities is desired.