Overview
mpasila/Llama-3.1-Literotica-8B is an 8-billion-parameter language model fine-tuned by mpasila from unsloth/Meta-Llama-3.1-8B. It was trained specifically to generate content in the style of Literotica stories, using a curated dataset derived from mpasila/Literotica-stories-short.
Key Training Details
- Base Model: unsloth/Meta-Llama-3.1-8B
- Dataset: A subset of Literotica stories, chunked to 8192 tokens.
- Training Method: LoRA (rank 128, alpha 32) for 1 epoch.
- Training Duration: Approximately 13 hours on an A40 GPU.
- Tools: Trained using Unsloth for accelerated training and Hugging Face's TRL library (a sketch of this setup follows below).
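The card does not include the training script, but the listed settings map onto the standard Unsloth + TRL workflow. The sketch below is a hedged reconstruction: only the base model, dataset, 8192-token sequence length, LoRA rank/alpha, and single epoch come from the list above. Everything else (4-bit loading, target modules, batch size, learning rate, text field name) is an illustrative assumption, not the author's actual configuration.

```python
# Hypothetical reconstruction of the fine-tuning setup described above.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 8192  # stories were chunked to 8192 tokens

# Load the base model (4-bit loading is an assumption, not stated in the card)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters with the rank/alpha given in the card;
# target_modules are an assumption (common choice for Llama models)
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Dataset field name "text" is an assumption about the dataset's schema
dataset = load_dataset("mpasila/Literotica-stories-short", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        num_train_epochs=1,               # from the card
        per_device_train_batch_size=1,    # illustrative
        gradient_accumulation_steps=8,    # illustrative
        learning_rate=2e-4,               # illustrative
        output_dir="outputs",
    ),
)
trainer.train()
```

Exact argument names can vary between TRL versions; the pattern above follows the commonly used Unsloth notebook style.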
Intended Use Cases
This model is primarily designed for:
- Generating creative narratives and stories with themes similar to those found on Literotica.
- Exploratory research into fine-tuning large language models for specific stylistic outputs.
Users should be aware that the model's training data focuses on adult-oriented content, and its outputs will reflect this specialization. The model is released under the Llama 3.1 Community License Agreement.
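For quick experimentation with story generation, the model can be loaded like any other Llama 3.1 checkpoint with the Transformers library. The snippet below is a minimal sketch; the prompt, dtype, and sampling settings are illustrative assumptions rather than recommended values.

```python
# Minimal inference sketch; generation parameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpasila/Llama-3.1-Literotica-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# This is a base-style fine-tune, so plain text continuation is used here
prompt = "The rain had just started when she reached the cabin."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```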