PYAE1994/Roleplay-Llama-3-8B

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Apr 11, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

PYAE1994/Roleplay-Llama-3-8B is an 8 billion parameter Llama-3 model fine-tuned specifically for roleplay scenarios, generating dialogue interspersed with actions. It was trained using the ResplendentAI/NSFW_RP_Format_DPO dataset to produce structured roleplay outputs. This model excels in interactive narrative generation, achieving a high ranking on the Chaiverse leaderboard for its parameter size. Its primary strength lies in creating dynamic and formatted roleplay conversations.


Overview

PYAE1994/Roleplay-Llama-3-8B is an 8 billion parameter Llama-3 model that has been fine-tuned for roleplay generation. Its training utilized the ResplendentAI/NSFW_RP_Format_DPO dataset, which specifically teaches the model to format its outputs with dialogue and interspersed actions (e.g., dialogue *action*).
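Because outputs follow this `dialogue *action*` convention, downstream applications can split a response into its dialogue and action spans. The helper below is a minimal sketch of such a parser, assuming actions are always delimited by single asterisks as in the example above:

```python
import re

def parse_roleplay(text: str):
    """Split a roleplay response into (kind, content) segments, where
    kind is 'action' for *...* spans and 'dialogue' for everything else.
    Assumes the dialogue *action* formatting convention described above."""
    segments = []
    # The capture group keeps the *action* spans in the split output.
    for part in re.split(r"(\*[^*]+\*)", text):
        part = part.strip()
        if not part:
            continue
        if part.startswith("*") and part.endswith("*"):
            segments.append(("action", part.strip("*").strip()))
        else:
            segments.append(("dialogue", part))
    return segments

reply = "Hello, traveler. *smiles warmly* What brings you here?"
print(parse_roleplay(reply))
# → [('dialogue', 'Hello, traveler.'), ('action', 'smiles warmly'),
#    ('dialogue', 'What brings you here?')]
```

Separating the two span types this way is useful, for example, for rendering actions in italics or routing them to a different UI element than spoken lines.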

Key Capabilities

  • Structured Roleplay Generation: Produces responses formatted with dialogue and embedded actions, ideal for interactive narrative experiences.
  • High Performance in Roleplay: Achieves a strong ranking on the Chaiverse leaderboard, notably being the top-performing 8B parameter model by ELO score as of April 23, 2024.
  • Llama-3 Architecture: Benefits from the foundational capabilities of the Llama-3 model family.
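Since this is a Llama-3 fine-tune, prompts presumably follow the standard Llama-3 instruct chat format. The sketch below assembles a single-turn prompt with a roleplay persona by hand so the token layout is visible; in practice you would let `tokenizer.apply_chat_template` from `transformers` build it from the template shipped with the model (the persona text here is an illustrative assumption, not from the model card):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the standard Llama-3 instruct
    format: system and user turns, then an open assistant header that
    the model completes."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Hypothetical persona asking for the dialogue *action* output style.
persona = ("You are Mira, a sarcastic tavern keeper. "
           "Reply with dialogue and *actions*.")
prompt = build_llama3_prompt(persona, "Got any rooms free tonight?")
print(prompt)
```

The trailing assistant header with no `<|eot_id|>` is what cues the model to generate the character's next turn.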

Evaluations

Although the model is optimized for roleplay, it has also been scored on general language understanding and reasoning benchmarks from the Open LLM Leaderboard:

  • Avg. Score: 24.33
  • IFEval (0-shot): 73.20
  • BBH (3-shot): 28.55
  • MMLU-PRO (5-shot): 30.09

Good For

  • Applications requiring dynamic and formatted roleplay interactions.
  • Generating character dialogue with integrated actions for storytelling or interactive fiction.
  • Use cases where a compact yet capable roleplay-specific model is needed.