EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1
EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1 is a 72.7-billion-parameter, full-parameter finetune of the Qwen2.5-72B architecture, developed by Kearm, Auri, and Cahvay. Optimized for roleplay and storywriting, the model leverages a greatly expanded mixture of synthetic and natural data, including the Celeste 70B 0.1 data mixture. It demonstrates significant improvements in instruction following, long-context understanding, and overall coherence, and its 131,072-token context length makes it well suited to creative text generation tasks.
EVA-Qwen2.5-72B-v0.1: Roleplay and Storywriting Specialist
EVA-Qwen2.5-72B-v0.1 is a 72.7-billion-parameter model, developed by Kearm, Auri, and Cahvay, fine-tuned specifically for roleplay (RP) and storywriting applications. It is a full-parameter finetune of the Qwen2.5-72B base architecture, building upon and significantly expanding the data mixture used in Celeste 70B 0.1.
Key Capabilities & Features
- Specialized Finetuning: Optimized for creative text generation, particularly roleplay and story creation.
- Enhanced Coherence: Version 0.1 features reprocessed datasets and a readjusted training configuration, leading to significant improvements in instruction following, long-context understanding, and overall narrative coherence compared to its predecessor.
- Extensive Training Data: Trained on a diverse mixture of synthetic and natural datasets, including:
  - Celeste 70B 0.1 data mixture (excluding Opus Instruct subset)
  - Kalomaze's Opus_Instruct_25k (filtered)
  - Subsets from ChatGPT-4o-WritingPrompts and Sonnet3.5-Charcards-Roleplay by Gryphe
  - Synthstruct and SynthRP datasets by Epiculous
  - Filtered subsets from Dolphin-2.9.3 (not_samantha, systemchat)
- Long Context: Supports a context length of 131,072 tokens, beneficial for extended roleplay scenarios and complex story arcs.
- ChatML Format: Uses the ChatML prompt format for interaction.
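Since the model expects ChatML-formatted prompts, a minimal sketch of how to render a conversation in that format may help; the helper function and the example system prompt below are illustrative, not part of the model card.

```python
# Minimal sketch of the ChatML prompt format this model expects.
# The <|im_start|>/<|im_end|> delimiters and role names follow the
# standard ChatML convention; the message contents are hypothetical.

def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are the narrator of an interactive story."},
    {"role": "user", "content": "Describe the castle gates at dusk."},
])
```

Most inference frontends (including SillyTavern's ChatML preset) apply this template automatically, so manual formatting is only needed when calling the model through a raw completion endpoint.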
Recommended Usage
This model is ideal for applications requiring highly creative, coherent, and context-aware text generation in roleplay and storywriting domains. The following sampler values are recommended for optimal performance:
- Temperature: 1
- Min-P: 0.05
- Top-A: 0.2
- Repetition Penalty: 1.03
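The recommended values above can be collected into a single settings dict; the parameter names below follow common inference-backend conventions (e.g. SillyTavern or text-generation-webui) and are an assumption, since "top_a" in particular is not supported by every backend.

```python
# Recommended sampler settings for EVA-Qwen2.5-72B-v0.1, as one dict.
# Key names are backend-dependent assumptions; check your inference
# server's documentation, especially for min_p and top_a support.
sampler_settings = {
    "temperature": 1.0,
    "min_p": 0.05,
    "top_a": 0.2,
    "repetition_penalty": 1.03,
}
```

With an OpenAI-compatible server, non-standard keys like these are typically passed through an "extra body" or equivalent pass-through mechanism rather than as top-level request fields.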
SillyTavern presets for context and instruct/system prompts are also available for enhanced roleplay experiences.