Overview
nothingiisreal/L3-8B-Celeste-v1: A Roleplay-Optimized LLaMA 3 Model
L3-8B-Celeste-v1 is an 8-billion-parameter model built on LLaMA 3 8B Instruct and fine-tuned by nothingiisreal for roleplaying scenarios. It was trained at an 8K context length on a blend of datasets: Reddit Writing Prompts (both SFW and NSFW), Opus 15K Instruct, and cleaned c2 logs.
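The model can be loaded like any other causal LM from the Hugging Face Hub. A minimal sketch, assuming the repo id matches the model name above; the dtype and device settings are illustrative choices, not requirements from the model card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nothingiisreal/L3-8B-Celeste-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed; fp16 or 4-bit quantization also fit smaller GPUs
    device_map="auto",
)
```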
Key Capabilities & Differentiators
- Exceptional Roleplay: Primarily designed for roleplay, demonstrating strong persona consistency and creative narrative generation.
- High Steerability: Responds exceptionally well to "OOC:" (Out of Character) instructions, allowing users to redirect character behavior and plot development mid-conversation (see the sketch after this list).
- Dynamic Persona: The model's persona is highly adaptable, influenced by recent messages rather than solely fixed system prompts, enabling complex character evolution.
- Style Versatility: Exhibits a wide range of prose styles and strong style-copying abilities from few-shot examples, attributed to its human-generated longform training data.
- Context Length: Supports an 8K context length; experimental reports suggest stable performance up to 16K in some roleplay scenarios.
- Uncensored for RP: Designed to be uncensored for roleplay, capable of handling both SFW and NSFW content based on user input.
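Continuing from the loading sketch above, the snippet below shows one way to combine a persona system prompt with an in-conversation "OOC:" steering instruction, using the LLaMA 3 chat template via apply_chat_template. The persona, messages, and sampler settings are made-up assumptions for illustration:

```python
# A roleplay exchange with an OOC instruction injected mid-conversation.
messages = [
    {"role": "system", "content": "You are Celeste, a wry starship engineer. Stay in character."},
    {"role": "user", "content": "The reactor alarm starts blaring. What do you do?"},
    {"role": "assistant", "content": "*grabs the diagnostic tablet* 'Not again. Give me thirty seconds.'"},
    {"role": "user", "content": "OOC: Slow the pacing down and add more sensory detail.\n\nYou have thirty seconds."},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=512,   # leave headroom within the 8K context window
    do_sample=True,
    temperature=1.0,      # assumed sampler settings; tune to taste
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The OOC line is placed in a normal user turn rather than the system prompt, reflecting the point above that the model's persona is steered by recent messages as much as by the fixed system prompt.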
Good For
- Developers and enthusiasts seeking a highly specialized model for creative and interactive roleplaying applications.
- Scenarios requiring dynamic character interactions and steerable narrative progression.
- Use cases where varied writing styles and creative output are paramount.