ArliAI/QwQ-32B-ArliAI-RpR-v2
Text generation · Concurrency cost: 2 · Model size: 32B · Quant: FP8 · Context length: 32k · Published: Apr 23, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights
ArliAI's QwQ-32B-ArliAI-RpR-v2 is a 32-billion-parameter language model fine-tuned for roleplay and creative writing, building on the dataset curation of the RPMax series. It inherits reasoning capabilities from the QwQ base model and is tuned to maintain coherence and avoid refusals in long, multi-turn roleplay chats. The model also aims to reduce cross-context repetition, giving its creative output more variety and making it well suited to dynamic, engaging narrative generation.
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model (exact values are not shown in this extract):
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
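As a rough sketch of how these sampler parameters are typically supplied, the snippet below assembles an OpenAI-style chat-completion payload. The endpoint, field support, and the specific values are illustrative assumptions, not the actual "top 3" configurations (those values are not captured above), and some fields such as top_k, repetition_penalty, and min_p are extensions supported by some OpenAI-compatible servers rather than the core OpenAI API.

```python
def build_request(prompt: str) -> dict:
    """Assemble a chat-completion payload with explicit sampler settings.

    All sampler values below are placeholders for illustration only.
    """
    return {
        "model": "ArliAI/QwQ-32B-ArliAI-RpR-v2",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.8,         # randomness of token selection
        "top_p": 0.95,              # nucleus-sampling probability cutoff
        "top_k": 40,                # restrict choices to the k most likely tokens
        "frequency_penalty": 0.0,   # penalize tokens by how often they appear
        "presence_penalty": 0.0,    # penalize tokens that have appeared at all
        "repetition_penalty": 1.1,  # multiplicative penalty on repeated tokens
        "min_p": 0.05,              # drop tokens below this relative probability
    }

payload = build_request("Describe a rainy harbor town at dusk.")
print(sorted(payload.keys()))
```

The payload would then be POSTed to the server's chat-completions endpoint with an HTTP client of your choice.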