N-Bot-Int/OpenElla-NovelWriter-8B-merged
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Apr 13, 2026 · License: agpl-3.0 · Architecture: Transformer
N-Bot-Int/OpenElla-NovelWriter-8B-merged is an 8-billion-parameter experimental Llama 3.1-based language model developed by N-Bot-Int. Fine-tuned from p-e-w/Llama-3.1-8B-Instruct-heretic, it was trained with Unsloth and Hugging Face's TRL library for accelerated fine-tuning. The model is designed for creative writing, drawing on a dataset of mixed RPG content (V1-V3) to strengthen its narrative generation capabilities.
OpenElla-NovelWriter-8B-merged: An Experimental Llama 3.1 Model
OpenElla-NovelWriter-8B-merged is an 8-billion-parameter experimental language model developed by N-Bot-Int. It is fine-tuned from the p-e-w/Llama-3.1-8B-Instruct-heretic base model and built on the Llama 3.1 architecture.
Key Characteristics
- Accelerated Training: The model was trained using the Unsloth library together with Hugging Face's TRL library, which substantially speeds up fine-tuning.
- Specialized Dataset: It was trained on a unique dataset comprising "RPG Mixed V1-V2-V3" content, indicating a focus on narrative generation and role-playing scenarios.
- Experimental Status: The model remains under active experimentation, with its capabilities still being developed and evaluated.
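As a rough illustration, the Unsloth + TRL combination described above typically looks like the sketch below. Everything here is an assumption: the card does not publish the training recipe, so the dataset column names, the chat formatting, the LoRA settings, and all hyperparameters are hypothetical placeholders.

```python
# Hypothetical fine-tuning sketch (Unsloth + TRL SFTTrainer). This is NOT the
# authors' actual recipe; dataset fields and hyperparameters are assumptions.

def to_chat_text(instruction: str, response: str) -> str:
    """Render one RPG training pair with Llama-3.1-style chat headers (assumed template)."""
    return (
        f"<|start_header_id|>user<|end_header_id|>\n\n{instruction}<|eot_id|>"
        f"<|start_header_id|>assistant<|end_header_id|>\n\n{response}<|eot_id|>"
    )

if __name__ == "__main__":
    from unsloth import FastLanguageModel      # accelerated loading + LoRA patching
    from trl import SFTConfig, SFTTrainer      # supervised fine-tuning loop
    from datasets import Dataset

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="p-e-w/Llama-3.1-8B-Instruct-heretic",
        max_seq_length=8192,  # matches the 8k context length on the card
    )
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,  # LoRA rank: an illustrative choice, not the published value
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )

    # Placeholder data; the actual "RPG Mixed V1-V2-V3" dataset is not reproduced here.
    data = Dataset.from_dict({
        "text": [to_chat_text("Describe the dungeon entrance.",
                              "The stone arch weeps with centuries of damp...")]
    })

    trainer = SFTTrainer(
        model=model,
        train_dataset=data,
        args=SFTConfig(max_steps=100, per_device_train_batch_size=2),
    )
    trainer.train()
```

The heavy library imports are deferred into the `__main__` block so the formatting helper can be reused without pulling in GPU dependencies.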
Potential Use Cases
- Creative Writing: Its training on RPG-focused datasets makes it suitable for generating narrative content, character dialogues, and story plots.
- Role-Playing Scenarios: The model's foundation and training data suggest potential for engaging in interactive role-playing and text-based adventure generation.
- Exploration of Llama 3.1 Fine-tuning: Developers interested in the performance and characteristics of Llama 3.1 models fine-tuned with specific, narrative-rich datasets may find this model valuable for research and application development.
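For developers exploring the model, a minimal inference sketch using Hugging Face transformers' text-generation pipeline might look like the following. The model id comes from the card; the prompts, sampling settings, and the `build_messages` helper are illustrative assumptions, not documented usage.

```python
# Hypothetical inference sketch with the transformers text-generation pipeline.
# Prompts and generation settings below are assumptions for illustration.

MODEL_ID = "N-Bot-Int/OpenElla-NovelWriter-8B-merged"

def build_messages(system: str, user: str) -> list[dict]:
    """Assemble a chat message list in the format transformers pipelines accept."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

if __name__ == "__main__":
    from transformers import pipeline  # heavy import, deferred to runtime

    generator = pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype="auto",  # let transformers pick a dtype for the FP8 checkpoint
        device_map="auto",
    )
    messages = build_messages(
        "You are a novelist narrating a text-based RPG.",
        "Open a scene in a storm-battered coastal tavern.",
    )
    out = generator(messages, max_new_tokens=256, do_sample=True, temperature=0.8)
    print(out[0]["generated_text"][-1]["content"])
```

Sampling (`do_sample=True` with a moderate temperature) is a common default for creative-writing models, though the card does not recommend specific generation parameters.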