zerofata/L3.3-GeneticLemonade-Opus-70B

Available on Hugging Face

Text generation · Model size: 70B · Quant: FP8 · Context length: 32k · Published: Jul 8, 2025 · License: llama3 · Architecture: Transformer · Concurrency cost: 4

zerofata/L3.3-GeneticLemonade-Opus-70B is a 70 billion parameter merged language model created by zerofata, built upon the shisa-ai/shisa-v2-llama3.3-70b base. It combines three distinct roleplay (RP) models—GeneticLemonade-Unleashed-v3, Plesio-70B, and Anubis-70B-v1.1—to pair creative generation with distinctive prose and strong character portrayal. It is optimized for diverse roleplay and ERP scenarios and supports a 32,768-token context length.


zerofata/L3.3-GeneticLemonade-Opus-70B Overview

L3.3-GeneticLemonade-Opus-70B is a 70 billion parameter merged language model developed by zerofata, built on shisa-ai/shisa-v2-llama3.3-70b as its base. The model is a strategic merge of three individually strong and stable roleplay (RP) models, each contributing distinct strengths to the final output.

Key Capabilities & Merged Components

  • Creative & Generalist RP/ERP: Incorporates the strengths of zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B, providing broad and imaginative roleplay generation.
  • Unique Prose & Dialogue: Benefits from Delta-Vector/Plesio-70B, excelling in generating distinctive writing styles and conversational patterns.
  • Character Portrayal & Neutral Alignment: Integrates TheDrummer/Anubis-70B-v1.1, ensuring robust character consistency and a neutrally aligned approach to roleplay scenarios.
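The merge recipe itself is not published on this page. Purely as an illustration, a three-model merge on the shisa-v2 base could be expressed as a mergekit config along these lines; the merge method, weights, and densities below are assumptions, not the author's actual recipe:

```yaml
# Hypothetical mergekit config -- merge_method, weights, and densities are
# illustrative assumptions, not zerofata's published recipe.
base_model: shisa-ai/shisa-v2-llama3.3-70b
merge_method: dare_ties        # assumed; the actual method is not stated here
models:
  - model: zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B
    parameters: {weight: 0.4, density: 0.7}
  - model: Delta-Vector/Plesio-70B
    parameters: {weight: 0.3, density: 0.7}
  - model: TheDrummer/Anubis-70B-v1.1
    parameters: {weight: 0.3, density: 0.7}
dtype: bfloat16
```

In mergekit, `weight` controls each donor's contribution and `density` (for TIES-style methods) controls how many of its task-vector parameters are retained before merging.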

Recommended Usage

This model is primarily designed for diverse roleplay and ERP applications, offering a rich blend of creative storytelling, unique linguistic expression, and consistent character handling. It supports a substantial context length of 32768 tokens, allowing for extended and complex interactions.

For optimal performance, specific SillyTavern settings are recommended:

  • Temperature: 0.9 - 1.2
  • MinP: 0.03 - 0.04
  • TopP: 0.9 - 1.0
  • DRY: multiplier 0.8, base 1.75, allowed length 4

Users should select "Llama-3-Instruct-Names" for instruct settings but uncheck "System same as user" for best results. Quantized versions are available in GGUF (via bartowski) and EXL3 (4.25bpw) formats.
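Outside SillyTavern, the standard samplers above map directly onto an OpenAI-compatible chat request. The sketch below (endpoint URL and chosen values within the recommended ranges are assumptions for your own deployment) assembles such a request; note that DRY is a SillyTavern/koboldcpp-style extension and has no field in the OpenAI schema, and `min_p` is a backend extension supported by engines such as vLLM and llama.cpp:

```python
# Sketch: building a chat-completion payload with the card's recommended
# samplers. Endpoint and concrete values are assumptions, not fixed settings.

def build_request(user_message: str) -> dict:
    """Assemble a chat-completion payload using the recommended samplers."""
    return {
        "model": "zerofata/L3.3-GeneticLemonade-Opus-70B",
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 1.0,   # recommended range: 0.9 - 1.2
        "top_p": 0.95,        # recommended range: 0.9 - 1.0
        "min_p": 0.035,       # recommended range: 0.03 - 0.04 (backend extension)
        "max_tokens": 512,
    }

# Sending it would look like this (requires the `requests` package
# and a running OpenAI-compatible server):
# import requests
# resp = requests.post("http://localhost:8000/v1/chat/completions",
#                      json=build_request("Introduce your character."))
```

Keeping the payload construction in one function makes it easy to sweep the temperature/min_p ranges when tuning output style.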

Popular Sampler Settings

Featherless tracks the three parameter combinations most used for this model, covering the following samplers: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.