TheBloke/CAMEL-13B-Role-Playing-Data-fp16
Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4K · License: other · Architecture: Transformer

CAMEL-13B-Role-Playing-Data-fp16 is a 13 billion parameter chat-oriented large language model developed by Camel AI, fine-tuned from LLaMA-13B. The model is specifically optimized for role-playing conversations, having been trained on 229K role-playing dialogues. It achieves an average benchmark score of 57.2, outperforming the larger LLaMA-30B, making it suitable for conversational AI applications that require nuanced character interaction.


Overview

CAMEL-13B-Role-Playing-Data-fp16 is a 13 billion parameter chat-optimized large language model, developed by Camel AI. It is a fine-tuned version of the LLaMA-13B model, specifically trained to excel in role-playing scenarios. The model leverages a dataset of 229,000 conversations generated through Camel AI's unique role-playing framework.

Key Capabilities

  • Role-Playing Specialization: The model is explicitly fine-tuned on extensive role-playing data, making it highly capable of generating contextually appropriate and engaging dialogues for character-based interactions.
  • Performance: Despite its 13B parameter count, CAMEL-13B-Role-Playing-Data achieves an average score of 57.2 on the EleutherAI language model evaluation harness, surpassing the performance of the larger LLaMA-30B model (56.9) in evaluated benchmarks.
  • Benchmark Results: Key performance metrics include:
    • ARC-C: 54.9
    • HellaSwag: 79.3
    • MMLU: 48.5
    • TruthfulQA: 46.2
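As a quick sanity check, the four scores listed above do average out to the reported 57.2. The exact task set used in the harness run isn't stated here, so this sketch assumes the average is taken over just these four benchmarks:

```python
# Benchmark scores reported for CAMEL-13B-Role-Playing-Data-fp16.
scores = {
    "ARC-C": 54.9,
    "HellaSwag": 79.3,
    "MMLU": 48.5,
    "TruthfulQA": 46.2,
}

# Unweighted mean over the four reported tasks.
average = sum(scores.values()) / len(scores)
print(f"Average: {average:.1f}")  # 57.2, matching the reported figure
```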

Good For

  • Conversational AI: Ideal for applications requiring dynamic and context-rich dialogue generation.
  • Interactive Storytelling: Can be used to power characters in games, simulations, or interactive narratives.
  • Chatbots with Persona: Suitable for creating chatbots that need to maintain specific roles or personalities.
  • Research in Role-Playing: Provides a strong base model for further experimentation and fine-tuning in role-playing AI.
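For the chatbot-with-persona use case, a common pattern is to keep a running transcript behind a fixed persona preamble and re-prompt the model each turn. Below is a minimal sketch of that loop; the `User:`/`Character:` prompt layout is an assumption for illustration, not a format documented for this model, and `generate` stands in for whatever inference call you use (e.g. a text-generation pipeline):

```python
def build_prompt(persona, history, user_message):
    """Assemble a role-play prompt: persona preamble, prior turns, new message.

    The "User:"/"Character:" layout is a hypothetical convention, not a
    format documented for CAMEL-13B-Role-Playing-Data-fp16.
    """
    lines = [f"You are role-playing as: {persona}", ""]
    for user_turn, model_turn in history:
        lines.append(f"User: {user_turn}")
        lines.append(f"Character: {model_turn}")
    lines.append(f"User: {user_message}")
    lines.append("Character:")  # cue the model to answer in character
    return "\n".join(lines)


def chat_turn(generate, persona, history, user_message):
    """Run one turn: build the prompt, call the model, record the exchange."""
    prompt = build_prompt(persona, history, user_message)
    reply = generate(prompt).strip()
    history.append((user_message, reply))  # history keeps the persona consistent
    return reply
```

With a real backend, `generate` would wrap the model call; here a stub shows the flow: `chat_turn(lambda p: "Ahoy!", "a pirate captain", [], "Hello")` returns the reply and appends the turn to the history.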