WasamiKirua/L3-Odyssey-70B

Text Generation | Concurrency Cost: 4 | Model Size: 70B | Quant: FP8 | Ctx Length: 8k | Published: Apr 30, 2026 | Architecture: Transformer

L3-Odyssey-70B is a 70-billion parameter model developed by WasamiKirua, merging Sao10K/L3-70B-Euryale-v2.1 and Steelskull/L3.3-MS-Nevoria-70b. This model is specifically designed for high-quality roleplay and immersive storytelling, excelling in narrative coherence, emotional intelligence, and strict instruction adherence across its 8192-token context. It functions as a versatile narrative chameleon, adapting to diverse genres and writing styles.


L3-Odyssey-70B: The Versatile Storyteller

L3-Odyssey-70B is a 70-billion-parameter model created by WasamiKirua, the result of a DARE-TIES merge of two Llama 3-family models: Sao10K/L3-70B-Euryale-v2.1 and Steelskull/L3.3-MS-Nevoria-70b. The fusion aims to combine Euryale's emotional intelligence and creative prose with Nevoria's stronger logic, context handling, and instruction following, built on Nevoria's Llama 3.3 foundation.
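The card does not publish the merge recipe, but a DARE-TIES merge of this kind is typically expressed as a mergekit configuration along the following lines. The base model and all `density`/`weight` values below are illustrative assumptions, not the author's actual settings:

```yaml
# Hypothetical mergekit config sketch for a DARE-TIES merge of the two
# source models. Values are placeholders, not the published recipe.
merge_method: dare_ties
base_model: meta-llama/Llama-3.3-70B-Instruct  # assumed base; not stated on the card
models:
  - model: Sao10K/L3-70B-Euryale-v2.1
    parameters:
      density: 0.5   # fraction of delta weights retained (illustrative)
      weight: 0.5    # contribution to the merged model (illustrative)
  - model: Steelskull/L3.3-MS-Nevoria-70b
    parameters:
      density: 0.5
      weight: 0.5
dtype: bfloat16
```

DARE-TIES randomly drops a portion of each model's delta weights (controlled by `density`) and rescales the rest before merging, which is what lets traits from both parents survive in one set of weights.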

Key Strengths:

  • Ultimate Flexibility: Adapts to a wide range of tasks, from casual chats to complex, long-form storytelling, adjusting vocabulary and tone.
  • Narrative Coherence: Maintains stable plotlines and character consistency over long contexts, thanks to its Llama 3.3 foundation.
  • Emotional Intelligence (EQ): Excels at parsing subtext, handling complex relationship dynamics, and delivering emotionally resonant dialogue.
  • Instruction Adherence: Highly responsive to System Prompts, World Info, and Character Cards, strictly following defined rules.
  • Uncensored Freedom: Operates without safety filters, engaging with any creative theme (dark, violent, explicit) purely for narrative goals.

Best Use Cases:

  • Long-Form Collaborative Writing: Ideal for writing novels or extensive story arcs with the AI as a writing partner.
  • Dynamic Roleplay: Suitable for scenarios with shifting tones, from action to drama to slice-of-life.
  • Text-Adventure Game Mastering: Functions as a capable GM, managing world elements, items, and NPCs consistently with descriptive prose.

Recommended inference settings are a temperature of 1.0-1.15 and a min-p of 0.05-0.1 to balance creativity and stability. The model has a native context of 8k tokens.
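In practice, those settings can be packaged as a request-parameter dict for an OpenAI-compatible client, as in the sketch below. The parameter names (notably `min_p`) are assumptions about the serving stack, e.g. vLLM's sampling extensions, and `max_tokens` is an arbitrary example budget:

```python
# Hedged sketch: the card's recommended sampler settings as request
# parameters. "min_p" as a parameter name is an assumption about the
# serving stack (e.g. vLLM); it is not part of the base OpenAI API.
SAMPLER_SETTINGS = {
    "temperature": 1.1,  # within the recommended 1.0-1.15 band
    "min_p": 0.05,       # within the recommended 0.05-0.1 band
    "max_tokens": 512,   # example generation budget
}

CTX_LENGTH = 8192  # the model's native context window


def fits_context(prompt_tokens: int,
                 max_tokens: int = SAMPLER_SETTINGS["max_tokens"]) -> bool:
    """Return True if prompt plus generation budget fits the 8k context."""
    return prompt_tokens + max_tokens <= CTX_LENGTH
```

Because the context is only 8k tokens, long-form roleplay sessions need prompt trimming or summarization once `fits_context` starts returning False.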