ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3

Warm
Public
70B
FP8
32768
4
Nov 17, 2024
License: llama3.1
Hugging Face

Llama-3.1-70B-ArliAI-RPMax-v1.3 is a 70-billion-parameter model developed by ArliAI, based on the Llama-3.1-70B-Instruct architecture with a 128K context length (listed here with a 32,768-token context window). It is fine-tuned specifically for creative writing and roleplay, with an emphasis on reducing cross-context repetition so that outputs stay varied rather than falling into predictable tropes, making it well suited to dynamic narrative generation.

Overview

ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3: Creative Writing & Roleplay

This model is part of ArliAI's RPMax series, designed specifically for creative writing and roleplay (RP) applications. It is a 70-billion-parameter model built on the Llama-3.1-70B-Instruct base, featuring an extended context length of 128K tokens.
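
For illustration, here is a minimal sketch of querying this model through an OpenAI-compatible chat endpoint; the base URL, API key, and sampling values are placeholder assumptions rather than details from this listing, so substitute whatever your provider documents.

    # Minimal sketch: calling the model through an OpenAI-compatible chat endpoint.
    # The base_url and api_key are placeholders (assumptions), not values from this listing.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.example.com/v1",  # hypothetical endpoint; use your provider's URL
        api_key="YOUR_API_KEY",
    )

    response = client.chat.completions.create(
        model="ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3",
        messages=[
            {"role": "system", "content": "You are Mira, a wry starship engineer. Stay in character."},
            {"role": "user", "content": "The reactor is making a new noise. What do you do?"},
        ],
        max_tokens=512,
        temperature=1.0,  # illustrative sampling values, not recommendations from this card
    )
    print(response.choices[0].message.content)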

Key Capabilities & Differentiators

  • Reduced Cross-Context Repetition: The primary goal of RPMax models is to combat "cross-context repetition," where models repeat phrases or tropes across different scenarios. This is achieved through a unique dataset curation process that deduplicates characters and situations.
  • Enhanced Creativity: By minimizing repetition, the model aims to produce creative, varied outputs that avoid predictable patterns.
  • Unconventional Fine-tuning: The model is trained for a single epoch with low gradient accumulation and a relatively high learning rate, using RS-QLORA+ (64-rank, 64-alpha, ~2% trainable weights). This encourages learning from individual examples without overfitting, producing a distinct writing style; an illustrative configuration sketch follows this list.
  • Llama 3.1 Instruct Format: Uses the Llama 3.1 Instruct prompt format for best results; the template is shown after the configuration sketch below.
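
The exact training recipe is not published in this listing; the following is only an illustrative sketch of a single-epoch, rank-64 / alpha-64 rank-stabilized LoRA setup using Hugging Face peft and transformers, and every value not named above (learning rate, dropout, target modules) is an assumption.

    # Illustrative sketch only: single-epoch, rank-64 / alpha-64 rank-stabilized LoRA setup.
    # Values not stated in this card (learning rate, dropout, target modules) are assumptions.
    from peft import LoraConfig
    from transformers import TrainingArguments

    lora_config = LoraConfig(
        r=64,                    # 64-rank, as described above
        lora_alpha=64,           # 64-alpha, as described above
        use_rslora=True,         # rank-stabilized LoRA scaling
        lora_dropout=0.05,       # assumption, not from this card
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
        task_type="CAUSAL_LM",
    )

    training_args = TrainingArguments(
        output_dir="rpmax-sketch",
        num_train_epochs=1,              # single epoch, as described above
        gradient_accumulation_steps=1,   # low gradient accumulation, as described above
        learning_rate=2e-4,              # "relatively high"; the exact value is not given here
        bf16=True,
    )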

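For reference, the core of the standard Llama 3.1 Instruct prompt template looks like this, where {system_prompt} and {user_message} are placeholders; in practice, tokenizer.apply_chat_template from the transformers library produces this format automatically.

    <|begin_of_text|><|start_header_id|>system<|end_header_id|>

    {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

    {user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
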
Ideal Use Cases

  • Dynamic Roleplay: Excels in scenarios requiring varied character interactions and unpredictable narrative progression.
  • Creative Story Generation: Suitable for generating unique stories and prose without the "in-bred" writing style common to many LLMs.
  • Applications Requiring Non-Repetitive Output: Useful where avoiding repetitive phrases or tropes is critical for user engagement.