DS-R1-Distill-70B-ArliAI-RpR-v4-Large is a 70-billion-parameter language model from ArliAI, built on the deepseek-ai/DeepSeek-R1-Distill-Llama-70B base model with a 32K context length. It is fine-tuned on the RpR (RolePlay with Reasoning) v4 dataset, which is designed to strengthen creative writing and roleplay while retaining reasoning ability for coherent, multi-turn conversations. The model aims to reduce repetition across long contexts and to avoid impersonating the user, giving it a distinctive, non-repetitive writing style suited to complex narrative interactions.
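A minimal sketch of running the model for a roleplay-style chat with Hugging Face transformers is shown below. The repository id, sampling settings, and system prompt are assumptions for illustration, not values taken from this card; a 70B model at bf16 will generally need multiple GPUs or quantization.

```python
# Sketch: loading and prompting the model with transformers (repo id assumed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 70B weights; expect multi-GPU or quantization
    device_map="auto",
)

# A single roleplay turn; the chat template comes from the tokenizer config.
messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Describe the scene as we enter the old library."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the base model is a DeepSeek-R1 distill, responses may include a reasoning trace before the final reply; keeping the full 32K context available helps the model stay consistent over long multi-turn sessions.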