ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Jun 3, 2025 · License: apache-2.0 · Architecture: Transformer

DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small is an 8-billion parameter model from ArliAI's RpR (RolePlay with Reasoning) series, built upon the deepseek-ai/DeepSeek-R1-0528-Qwen3-8B base. Fine-tuned with an increased training sequence length of 16K, it is specifically optimized for creative writing and multi-turn roleplay, aiming to reduce repetition and maintain reasoning ability over long conversations. The model focuses on generating varied, non-repetitive outputs, making it suitable for complex interactive narrative applications.


ArliAI DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small: RolePlay with Reasoning (RpR) Model

This model is an 8-billion parameter entry in ArliAI's RpR v4 series, building on the deepseek-ai/DeepSeek-R1-0528-Qwen3-8B base. It is specifically fine-tuned for creative writing and multi-turn roleplay, emphasizing reduced repetition and enhanced reasoning capabilities in extended interactions. The RpR series leverages a unique dataset curation and training methodology, originally developed for the RPMax series, to ensure high creativity and minimize cross-context repetition.

Key Capabilities & Features

  • Enhanced Reasoning in RP: Designed to maintain coherent reasoning throughout long, multi-turn roleplay chats, a significant improvement over single-response reasoning models.
  • Reduced Repetition: Employs advanced filtering to minimize both in-context and, critically, cross-context repetition, leading to more varied and less predictable outputs.
  • Increased Context Awareness: Trained with a 16K sequence length to improve memory and awareness in longer conversations.
  • Unique Training Methodology: Utilizes a single-epoch, higher learning rate approach to prevent overfitting and encourage diverse response generation, rather than mimicking specific dataset examples.
  • Optimized for Creative Writing: Focuses on generating unique, non-repetitive writing styles, distinguishing it from other RP-focused models.

Ideal Use Cases

  • Interactive Storytelling & Roleplay: Excels in applications requiring dynamic, creative, and sustained character interactions.
  • Long-form Creative Content Generation: Suitable for generating varied narratives and dialogues over extended conversational turns.
  • Applications requiring nuanced reasoning in conversational contexts.
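As a sketch of how such multi-turn roleplay interactions might be driven, the snippet below assembles an OpenAI-style chat-completions payload that carries the full conversation history on each turn. The helper function and the example conversation are illustrative assumptions; only the model identifier comes from this card.

```python
# Sketch: building a multi-turn roleplay request for an OpenAI-compatible
# chat-completions endpoint. The helper and sample turns are illustrative;
# the model ID matches this card's Hugging Face repo name.

MODEL_ID = "ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small"

def build_payload(history, user_turn, system_prompt):
    """Append the new user turn to the history and return a request dict."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_turn})
    return {"model": MODEL_ID, "messages": messages}

history = [
    {"role": "user", "content": "You find a locked door in the dungeon."},
    {"role": "assistant", "content": "I search my pack for lockpicks."},
]
payload = build_payload(
    history,
    "The lock clicks open. What next?",
    "You are the dungeon master in an ongoing adventure.",
)
print(len(payload["messages"]))  # system + 2 history turns + new user turn -> 4
```

Because the model is tuned for reasoning across long chats, resending the accumulated history each turn (up to the 32k context window) is what lets it stay coherent over extended interactions.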

Popular Sampler Settings

The three parameter combinations most commonly used by Featherless users for this model are built from the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
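A sampler configuration covering these parameters might be assembled as below and merged into a request. The values are illustrative placeholders only, not the actual top configurations used on Featherless.

```python
# Illustrative sampler settings covering the parameters listed above.
# Values are placeholders, NOT the actual Featherless user configs.

sampler_settings = {
    "temperature": 0.8,         # randomness of token selection
    "top_p": 0.95,              # nucleus sampling cutoff
    "top_k": 40,                # sample only from the k most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they appeared
    "presence_penalty": 0.0,    # penalize tokens that appeared at all
    "repetition_penalty": 1.1,  # down-weight already-generated tokens
    "min_p": 0.05,              # drop tokens below this fraction of the top prob
}

def apply_sampler(base_request, settings):
    """Return a new request dict with sampler settings merged in."""
    return {**base_request, **settings}

request = apply_sampler(
    {"model": "ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small",
     "prompt": "Once upon a time"},
    sampler_settings,
)
print(request["repetition_penalty"])  # -> 1.1
```

For a repetition-focused model like this one, the repetition_penalty and min_p knobs are the usual levers to adjust first if outputs still loop or flatten.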