ArliAI/QwQ-32B-ArliAI-RpR-v1
Text Generation · Concurrency Cost: 2 · Model Size: 32B · Quant: FP8 · Ctx Length: 32k · Published: Apr 7, 2025 · License: apache-2.0 · Architecture: Transformer · 0.1K · Open Weights · Warm

QwQ-32B-ArliAI-RpR-v1 is a 32-billion-parameter model from ArliAI's RpR series, built on the QwQ-32B base model. It is fine-tuned specifically for roleplay and creative writing, using a dataset curation and training methodology designed to minimize cross-context repetition and enhance creativity. The model is built to maintain strong reasoning abilities across long, multi-turn conversational contexts, making it suitable for complex interactive narratives.


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model adjust the following sampler settings:

- `temperature` – scales the randomness of token selection (lower is more deterministic)
- `top_p` – nucleus sampling: restricts choices to the smallest token set whose cumulative probability exceeds p
- `top_k` – restricts choices to the k most probable tokens
- `frequency_penalty` – penalizes tokens in proportion to how often they have already appeared
- `presence_penalty` – penalizes any token that has already appeared at least once
- `repetition_penalty` – multiplicative penalty applied to previously generated tokens
- `min_p` – discards tokens whose probability falls below a fraction of the most likely token's probability
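As a sketch of how these settings would be passed in practice, the snippet below builds an OpenAI-style chat completion payload. The numeric values are illustrative placeholders, not the actual Featherless user statistics (those are shown interactively on the site), and `top_k`, `repetition_penalty`, and `min_p` are common extensions (e.g. in vLLM-style APIs) that not every OpenAI-compatible endpoint accepts.

```python
import json

# Hypothetical sampler configuration -- illustrative values only,
# not the actual top settings reported by Featherless users.
sampler_config = {
    "temperature": 0.8,        # moderate randomness
    "top_p": 0.95,             # nucleus sampling cutoff
    "top_k": 40,               # consider only the 40 most likely tokens
    "frequency_penalty": 0.0,  # no per-occurrence penalty
    "presence_penalty": 0.0,   # no presence penalty
    "repetition_penalty": 1.05,  # mild multiplicative repetition penalty
    "min_p": 0.05,             # drop tokens below 5% of the top token's probability
}

# Example request body for an OpenAI-compatible chat completions endpoint.
payload = {
    "model": "ArliAI/QwQ-32B-ArliAI-RpR-v1",
    "messages": [{"role": "user", "content": "Continue the story."}],
    **sampler_config,
}

print(json.dumps(payload, indent=2))
```

Sending this payload to the provider's `/v1/chat/completions` route (with an API key) would run the model with those sampler settings; unsupported extension parameters are typically ignored or rejected depending on the backend.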