0xA50C1A1/Llama-3.3-8B-Nymphaea-RP
Llama-3.3-8B-Nymphaea-RP by 0xA50C1A1 is an 8 billion parameter Llama 3.3 Instruct fine-tune optimized for roleplay and creative writing. The model supports an 8192-token context length and was trained on an expanded iteration of the Darkmere dataset, with decensored base weights for broader creative applications. It is primarily designed for merging with other Llama 3.1/3.3 8B fine-tunes and excels at generating diverse narrative content.
Overview
Llama-3.3-8B-Nymphaea-RP is an 8 billion parameter model fine-tuned from Llama 3.3 Instruct, developed by 0xA50C1A1. It focuses on roleplay and creative writing and is suited to generating diverse narrative content. The model was trained specifically with merging into other Llama 3.1/3.3 8B fine-tunes in mind.
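As a standard Llama 3.3 Instruct fine-tune, the model should load with the Hugging Face transformers library. The following is a minimal inference sketch, assuming the repository ID from the title above; the roleplay system prompt and sampling parameters are illustrative choices of ours, not the author's recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository ID taken from the model card title.
model_id = "0xA50C1A1/Llama-3.3-8B-Nymphaea-RP"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative roleplay prompt; the system persona is our own example.
messages = [
    {"role": "system", "content": "You are the narrator of an interactive fantasy story."},
    {"role": "user", "content": "Describe the moment the caravan reaches the ruined gate at dusk."},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings are placeholders, not tuned recommendations.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```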
Key Characteristics
- Roleplay and Creative Writing Focus: Optimized for generating engaging and varied narrative content.
- Uncensored Base Weights: The base weights were processed using Heretic to remove inherent censorship, allowing for broader and more unrestricted creative outputs.
- Expanded Training Data: Trained on an updated iteration of the Darkmere dataset, which includes a mix of manually curated synthetic and human-written stories, contributing to its genre diversity.
- Training Methodology: Fine-tuned with DoRA (Weight-Decomposed Low-Rank Adaptation) at LoRA rank 64 and alpha 64, trained for 2 epochs with the adamw_torch_fused optimizer (a configuration sketch follows this list).
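The hyperparameters above map directly onto a PEFT DoRA configuration. The sketch below shows how such a setup is typically expressed with the peft and transformers libraries; the base checkpoint path, target modules, learning rate, and batch size are assumptions, since the card only states the rank, alpha, epoch count, and optimizer.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Placeholder path for the decensored Llama 3.3 Instruct base described in the card.
base = AutoModelForCausalLM.from_pretrained("path/to/llama-3.3-8b-instruct-base")

# Rank, alpha, and DoRA match the card; the target modules are a common
# choice for Llama models and are an assumption here.
peft_config = LoraConfig(
    r=64,
    lora_alpha=64,
    use_dora=True,  # enables Weight-Decomposed Low-Rank Adaptation
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()

# Epoch count and optimizer match the card; the remaining arguments
# (learning rate, batch sizing) are illustrative placeholders.
args = TrainingArguments(
    output_dir="nymphaea-rp-dora",
    num_train_epochs=2,
    optim="adamw_torch_fused",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)
```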
Use Cases
- Creative Content Generation: Ideal for generating stories, character dialogues, and immersive roleplay scenarios.
- Model Merging: Designed as a strong base or component for merging with other Llama 3.1/3.3 8B fine-tunes to create specialized models (a minimal linear-merge sketch follows this list).
- Unrestricted Text Generation: Suitable for applications requiring less constrained or filtered text outputs due to its uncensored nature.
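Merging is usually done with dedicated tooling such as mergekit, but the core idea of a simple linear merge can be sketched directly in Python. The snippet below averages the weights of this model with a second, hypothetical Llama 8B fine-tune; the second model ID and the 50/50 weighting are assumptions for illustration, not a recipe from the card.

```python
import torch
from transformers import AutoModelForCausalLM

# Second repository ID is hypothetical; substitute a real Llama 3.1/3.3 8B fine-tune.
model_a = AutoModelForCausalLM.from_pretrained(
    "0xA50C1A1/Llama-3.3-8B-Nymphaea-RP", torch_dtype=torch.float32
)
model_b = AutoModelForCausalLM.from_pretrained(
    "path/to/another-llama-8b-finetune", torch_dtype=torch.float32
)

# Linear (weighted-average) merge: every parameter tensor is blended
# elementwise. Both models must share the same architecture and shapes.
alpha = 0.5  # illustrative 50/50 blend
state_b = model_b.state_dict()
merged_state = {
    name: alpha * param_a + (1.0 - alpha) * state_b[name]
    for name, param_a in model_a.state_dict().items()
}

model_a.load_state_dict(merged_state)
model_a.save_pretrained("llama-8b-linear-merge")
```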