ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large
Text generation
- Concurrency cost: 4
- Model size: 70B
- Quant: FP8
- Context length: 32K
- Published: Jun 6, 2025
- License: llama3.3
- Architecture: Transformer

DS-R1-Distill-70B-ArliAI-RpR-v4-Large is a 70-billion parameter language model developed by ArliAI, built upon the deepseek-ai/DeepSeek-R1-Distill-Llama-70B base model with a 32K context length. This model is fine-tuned using the RpR (RolePlay with Reasoning) v4 dataset, specifically designed to enhance creative writing and roleplay capabilities while integrating reasoning abilities for coherent, multi-turn conversations. It focuses on reducing cross-context repetition and impersonation, offering a unique, non-repetitive writing style for complex narrative interactions.


Popular Sampler Settings

The sampler configurations most commonly used by Featherless users for this model tune the following parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
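As a sketch of how these parameters are typically supplied, the snippet below builds a request payload for an OpenAI-compatible chat completions endpoint. The specific values are illustrative placeholders, not the popular configs referenced above, and the non-standard fields (top_k, repetition_penalty, min_p) assume the serving backend accepts these common extensions.

```python
# Illustrative sampler settings for an OpenAI-compatible API request.
# Values are placeholders, not recommended settings for this model.
payload = {
    "model": "ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large",
    "messages": [{"role": "user", "content": "Continue the scene."}],
    "temperature": 1.0,          # randomness of token sampling
    "top_p": 0.95,               # nucleus sampling cutoff
    "top_k": 40,                 # limit sampling to the top-k tokens
    "frequency_penalty": 0.0,    # penalize tokens by occurrence count
    "presence_penalty": 0.0,     # penalize tokens already present
    "repetition_penalty": 1.05,  # multiplicative repetition penalty
    "min_p": 0.05,               # drop tokens below this relative probability
}
```

This payload would then be POSTed as JSON to the provider's chat completions endpoint with an API key.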