Overview
EVA-Qwen2.5-32B-v0.2: Roleplay and Storywriting Specialist
EVA-Qwen2.5-32B-v0.2 is a 32.8-billion-parameter model, a full-parameter fine-tune of the Qwen2.5-32B base model. Developed by Kearm, Auri, and Cahvay, it is designed specifically for stronger performance on roleplay and storywriting tasks. It was trained on an expanded mixture of synthetic and natural data that builds on the Celeste 70B 0.1 data mixture, with the aim of improving versatility, creativity, and narrative "flavor."
Key Capabilities
- Specialized Roleplay and Storywriting: Fine-tuned to excel in generating creative and engaging narrative content, making it suitable for interactive storytelling and character-driven applications.
- Enhanced Stability (v0.2): The v0.2 update addresses data-corruption issues present in earlier versions, resulting in more stable generation and fewer non-Unicode artifacts in outputs.
- Diverse Training Data: Trained on a mixture that includes the Celeste 70B 0.1 data (minus Opus Instruct), Kalomaze's Opus_Instruct_25k, subsets of ChatGPT-4o-WritingPrompts and Sonnet3.5-Charcards-Roleplay, and the Synthstruct and SynthRP datasets.
- ChatML Format: Uses the ChatML prompt format for interaction; the layout is shown below.
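For reference, ChatML wraps each message in `<|im_start|>`/`<|im_end|>` tokens and leaves the assistant turn open for the model to complete. The placeholder content below is illustrative:

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```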
Good for
- Creative Writing Applications: Generating stories, character dialogues, and descriptive narratives.
- Roleplaying Scenarios: Acting as a dynamic and creative participant in roleplay environments.
- Interactive Content Generation: Powering chatbots or virtual companions that require imaginative and coherent responses.
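As a concrete starting point for the use cases above, here is a minimal sketch that loads the model with Hugging Face Transformers and generates a single roleplay turn. The repository id, character prompt, and sampling settings are illustrative assumptions, not official recommendations.

```python
# Minimal sketch: roleplay generation with Hugging Face Transformers.
# The repo id, character card, and sampling settings below are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# A simple character card as the system prompt (hypothetical example).
messages = [
    {"role": "system", "content": "You are Mira, a wry starship engineer. Stay in character and keep replies vivid but concise."},
    {"role": "user", "content": "Mira, the reactor is making that noise again. What do we do?"},
]

# apply_chat_template renders the messages into the ChatML layout shown earlier,
# leaving an open assistant turn for the model to complete.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=400, do_sample=True, temperature=0.8, top_p=0.95)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

The same message structure extends to multi-turn chat: append each generated reply as an `assistant` message and the next user input as a `user` message before re-rendering the prompt.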