allura-org/Qwen2.5-32b-RP-Ink
Hugging Face
Text Generation · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Dec 30, 2024 · License: apache-2.0 · Architecture: Transformer · Concurrency Cost: 2 · Open Weights · Warm

Qwen2.5-32b-RP-Ink is a 32.8 billion parameter language model developed by allura-org, fine-tuned from Qwen 2.5 32b Instruct. This model is specifically optimized for roleplay scenarios, demonstrating strong prose generation and character portrayal. With a 131,072 token context length, it excels at handling complex narrative situations and detailed scene descriptions.


Overview

allura-org/Qwen2.5-32b-RP-Ink is a 32.8-billion-parameter language model created as a LoRA finetune of the Qwen 2.5 32b Instruct base model. Its development drew on methodologies from models such as SorcererLM and Slush, with a focus on enhancing roleplay capabilities. The model is part of the "Ink" series, known for its specialized performance in narrative and character-driven interactions.

Key Capabilities

  • Exceptional Roleplay Performance: Users report strong prose, accurate character portrayal, and smooth scene-setting, even in complex scenarios.
  • Detailed Narrative Generation: Excels at generating descriptive and engaging text for roleplaying, with testimonials highlighting its ability to handle intricate plotlines.
  • High Context Length: Features a substantial 131,072 token context window, allowing for extended and detailed roleplay sessions without losing coherence.
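
Long sessions eventually exhaust even a 131,072-token window, so clients typically check the running history against the budget before each request. A minimal sketch of such a check, using a rough characters-per-token heuristic (a real tokenizer gives exact counts; the 4-chars-per-token ratio and the `reserve` margin here are assumptions, not values from this page):

```python
def fits_context(messages, max_tokens=131072, reserve=1024, chars_per_token=4):
    """Rough check that a chat history fits the model's context window.

    `chars_per_token` is a crude English-text heuristic; swap in a real
    tokenizer count for accuracy. `reserve` leaves room for the reply.
    """
    used = sum(len(m["content"]) // chars_per_token for m in messages)
    return used + reserve <= max_tokens
```

When the check fails, a client would usually trim or summarize the oldest turns before retrying.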

Good For

  • Roleplaying Applications: Ideal for interactive fiction, character-driven chatbots, and any application requiring nuanced and consistent character interactions.
  • Creative Writing: Suitable for generating descriptive text, dialogue, and narrative prose, particularly where character voice and scene detail are crucial.
  • Complex Scenarios: Demonstrates proficiency in managing intricate plot developments and multiple character interactions within a single session.
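
For the chatbot and interactive-fiction uses above, requests typically follow the standard chat format: a system prompt carrying the character card, followed by alternating turns. A hedged sketch of assembling such a request body for an OpenAI-compatible endpoint (the helper name, `max_tokens` value, and payload shape are illustrative assumptions; sending the request is left to whatever client the host provides):

```python
def build_rp_request(system_prompt, history, user_turn,
                     model="allura-org/Qwen2.5-32b-RP-Ink"):
    """Assemble a chat-completion payload for an OpenAI-compatible API.

    `system_prompt` holds the character card / scene setup, `history` is a
    list of prior {"role", "content"} turns, `user_turn` is the new message.
    This only builds the request body; it does not call any endpoint.
    """
    messages = [{"role": "system", "content": system_prompt}]
    messages += list(history)
    messages.append({"role": "user", "content": user_turn})
    return {"model": model, "messages": messages, "max_tokens": 512}
```

Keeping the character description in the system role, rather than repeating it each turn, is what lets the model hold a consistent voice across a long session.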

Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model.

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
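
The parameters above map directly onto the sampling fields of a generation config. A sketch of such a config as a plain dict, with placeholder values (the actual community presets are not shown on this page, so every number below is an illustrative assumption, not a recommendation):

```python
# Illustrative sampler configuration for roleplay generation.
# The values are placeholders, NOT the Featherless community presets.
sampler_config = {
    "temperature": 0.8,         # randomness of token selection
    "top_p": 0.95,              # nucleus sampling cutoff
    "top_k": 40,                # restrict to the k most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they appear
    "presence_penalty": 0.0,    # penalize tokens that have appeared at all
    "repetition_penalty": 1.05, # multiplicative anti-repetition factor
    "min_p": 0.05,              # drop tokens below this fraction of the top prob
}
```

A dict like this can be merged into the request body alongside `model` and `messages` on endpoints that accept these fields.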