Model Overview
Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B is an 8-billion-parameter language model built on the Llama 3 architecture, with an 8192-token context length. It is the result of a multi-stage merge of fifteen distinct Llama 3-based models, orchestrated using LazyMergekit.
Merging Methodology
The model's unique capabilities stem from its multi-stage merging strategy, which includes:
- DARE TIES: Applied in three "Scrambled-Egg" stages, combining Sao10K/L3-8B-Stheno-v3.2, Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B, bluuwhale/L3-SthenoMaidBlackroot-8B-V1, Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2, migtissera/Llama-3-8B-Synthia-v3.5, tannedbum/L3-Nymeria-Maid-8B, Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B, tannedbum/L3-Nymeria-8B, and ChaoticNeutrals/Hathor_RP-v.01-L3-8B.
- Slerp: Used in two "Omelette" stages to blend the results of the DARE TIES merges, specifically combining Scrambled-Egg-1 with Scrambled-Egg-3, and then Omelette-1 with Scrambled-Egg-2.
- Task Arithmetic: The final stage integrates additional models, namely cgato/L3-TheSpice-8b-v0.8.3, Sao10K/L3-8B-Stheno-v3.1, Nitral-AI/Hathor_Stable-v0.2-L3-8B, aifeifei798/llama3-8B-DarkIdol-1.0, ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B, and ResplendentAI/Nymph_8B, each with specific weights.
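To make the staging concrete, here is a minimal sketch of what one DARE TIES "Scrambled-Egg" stage could look like as a mergekit YAML config. The choice of base model and the density and weight values are illustrative assumptions, not the actual values used for this merge:

```yaml
# Hypothetical sketch of a single "Scrambled-Egg" DARE TIES stage.
# base_model, density, and weight are illustrative placeholders.
merge_method: dare_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
    parameters:
      density: 0.5   # fraction of delta weights kept (DARE drop rate = 1 - density)
      weight: 0.3    # contribution of this model's task vector
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      density: 0.5
      weight: 0.3
dtype: bfloat16
```

A config like this is run with mergekit's `mergekit-yaml` command, producing an intermediate model that the subsequent Slerp "Omelette" stages then blend.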
Intended Use Cases
This model is primarily optimized for roleplay (RP) and uncensored content generation, leveraging the diverse characteristics of its constituent models to provide a flexible and expressive conversational experience. Its complex merge architecture aims to combine the strengths of various Llama 3 fine-tunes, making it suitable for creative writing, interactive storytelling, and open-ended dialogue scenarios.
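Since the constituent models are largely Llama 3 Instruct fine-tunes, the merged model most likely expects the standard Llama 3 Instruct prompt format. Below is a minimal sketch of assembling such a roleplay prompt by hand; the persona text is purely illustrative, and in practice the model tokenizer's `apply_chat_template` is the safer route:

```python
# Build a Llama 3 Instruct-style chat prompt by hand. This assumes the
# merged model inherits the standard Llama 3 Instruct special tokens from
# its constituent fine-tunes; when using transformers, prefer
# tokenizer.apply_chat_template() over manual string assembly.

def format_llama3_prompt(system: str, user: str) -> str:
    """Return a single-turn Llama 3 Instruct prompt string."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Illustrative roleplay persona (not from the model card):
prompt = format_llama3_prompt(
    "You are Ayla, a sardonic tavern keeper in a fantasy town.",
    "Tell me about the town.",
)
```

The trailing assistant header leaves the prompt open for the model to generate the in-character reply.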