allura-org/Qwen2.5-32b-RP-Ink
Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Dec 30, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights
Qwen2.5-32b-RP-Ink is a 32.8 billion parameter language model developed by allura-org, fine-tuned from Qwen 2.5 32B Instruct. The model is specifically optimized for roleplay scenarios, demonstrating strong prose generation and character portrayal. With a 131,072 token context length, it excels at handling complex narrative situations and detailed scene descriptions.
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model. Each configuration specifies the following sampler parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
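The actual values of the top configurations did not render on this page, so as a sketch only: a chat-completions request carrying these sampler parameters might look like the payload below. The numeric values are illustrative placeholders, not the real Featherless user statistics.

```python
import json

# Illustrative OpenAI-compatible chat-completions payload.
# All sampler values below are placeholders, NOT the actual
# popular configs (which did not render on the page).
payload = {
    "model": "allura-org/Qwen2.5-32b-RP-Ink",
    "messages": [
        {"role": "user", "content": "Stay in character as the ship's navigator."}
    ],
    "temperature": 0.8,          # randomness of token selection
    "top_p": 0.95,               # nucleus sampling: keep smallest set of tokens covering 95% mass
    "top_k": 40,                 # restrict choices to the 40 most likely tokens
    "frequency_penalty": 0.0,    # penalize tokens proportionally to how often they appeared
    "presence_penalty": 0.0,     # flat penalty on any token that has already appeared
    "repetition_penalty": 1.05,  # multiplicative penalty on repeated tokens
    "min_p": 0.05,               # drop tokens below 5% of the top token's probability
}

print(json.dumps(payload, indent=2))
```

Sending this body to an OpenAI-compatible endpoint (with an API key) would apply all seven sampler parameters listed above in a single request.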