Orion-zhen/Meissa-Qwen2.5-7B-Instruct

Available on Hugging Face

Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Context length: 32K · Published: Sep 27, 2024 · License: GPL-3.0 · Architecture: Transformer · Open weights

Meissa-Qwen2.5-7B-Instruct is a 7.6 billion parameter instruction-tuned causal language model developed by Orion-zhen, based on the Qwen2.5 architecture. It is fine-tuned on writing and role-playing datasets to improve performance in novel writing and character-based role-playing scenarios. It offers a substantial 131,072-token context length, making it well suited to generating extended narratives and maintaining complex conversational state.


Meissa-Qwen2.5-7B-Instruct Overview

Meissa-Qwen2.5-7B-Instruct is a 7.6 billion parameter language model developed by Orion-zhen, built upon the Qwen2.5-7B-Instruct-Uncensored base model. Its primary differentiation lies in its specialized fine-tuning, which focuses on enhancing capabilities for creative writing and role-playing applications.

Key Capabilities

  • Enhanced Novel Writing: Optimized for generating coherent and engaging long-form narratives.
  • Improved Role-Playing: Designed to excel in character-driven interactions and maintaining consistent personas.
  • Large Context Window: Benefits from the Qwen2.5 base's 131,072-token context length, supporting extensive dialogues and complex story arcs.
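
The snippet below is a minimal inference sketch using Hugging Face transformers. The chat-template call follows standard Qwen2.5 usage; the prompt, dtype, and sampling values are illustrative assumptions, not settings recommended by the model author.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Orion-zhen/Meissa-Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Example role-play style prompt; the persona and scene are made up for illustration.
messages = [
    {"role": "system", "content": "You are the narrator of an ongoing fantasy novel."},
    {"role": "user", "content": "Continue the scene: the caravan reaches the ruined watchtower at dusk."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling values are placeholders, not the card's recommended settings.
output = model.generate(
    input_ids, max_new_tokens=512, do_sample=True, temperature=0.8, top_p=0.95
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```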

Training Details

The model was fine-tuned with Supervised Fine-Tuning (SFT) on a collection of datasets curated for writing and role-playing:

  • anthracite-org/stheno-filtered-v1.1
  • MinervaAI/Aesir-Preview
  • Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
  • anthracite-org/nopm_claude_writing_fixed
  • Gryphe/Sonnet3.5-Charcard-Roleplay
  • nothingiisreal/DirtyWritingPrompts
  • Orion-zhen/tagged-pixiv-novel
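
For a rough picture of what SFT on such datasets looks like, here is a minimal sketch using Hugging Face TRL. It is not the author's actual training recipe: the dataset choice, formatting assumption, and hyperparameters are all illustrative.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# One dataset from the list above; assumes it is (pre)formatted as chat
# "messages" that TRL can render with the tokenizer's chat template.
dataset = load_dataset("Gryphe/Sonnet3.5-Charcard-Roleplay", split="train")

# Hyperparameters are placeholders, not the values used for Meissa.
trainer = SFTTrainer(
    model="Orion-zhen/Qwen2.5-7B-Instruct-Uncensored",  # base model named in the card
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="meissa-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
    ),
)
trainer.train()
```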

Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model adjust the following samplers:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
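
As a hedged illustration of how these samplers might be passed to an OpenAI-compatible endpoint: the base URL, API key, and every value below are placeholders, not one of the actual top-3 configurations.

```python
from openai import OpenAI

# Placeholder endpoint and key; assumes an OpenAI-compatible chat completions API.
client = OpenAI(base_url="https://api.featherless.ai/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="Orion-zhen/Meissa-Qwen2.5-7B-Instruct",
    messages=[{"role": "user", "content": "Write the opening paragraph of a gothic mystery."}],
    # Standard OpenAI-schema samplers; values are illustrative, not measured user settings.
    temperature=0.8,
    top_p=0.95,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Samplers outside the OpenAI schema are passed through extra_body.
    extra_body={"top_k": 40, "repetition_penalty": 1.05, "min_p": 0.05},
)
print(response.choices[0].message.content)
```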