royallab/Pygmalion-2-13b-SuperCOT2

Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4k · License: llama2 · Architecture: Transformer

royallab/Pygmalion-2-13b-SuperCOT2 is a 13-billion-parameter language model created by merging Pygmalion-2-13b with Ausboss's Llama2 SuperCOT2 LoRAs. The merge is designed to make Pygmalion "smarter" in roleplaying scenarios, specifically by improving its conversational and reasoning capabilities. It is optimized for interactive text adventures and roleplay, and supports both the Metharme and Alpaca instruction formats.


Overview

royallab/Pygmalion-2-13b-SuperCOT2 is a 13-billion-parameter language model created by merging the Pygmalion-2-13b base model with Ausboss's Llama2 SuperCOT2 LoRAs. The merge was performed with a command-line version of EzTrainer by CoffeeVampire/Blackroot, via zaraki-tools by Zaraki. This iteration aims to make Pygmalion "smarter" by integrating the SuperCOT2 LoRA, which was trained to stay closer to the original SuperCOT for LLaMA 1.

Key Capabilities

  • Enhanced Roleplaying: Specifically designed to improve the conversational and reasoning abilities of the Pygmalion base model for roleplaying contexts.
  • Instruction Format Compatibility: Supports both Metharme and Alpaca instruction formats, allowing for flexible integration into various applications.
  • Text Adventure Generation: Suitable for generating interactive text adventure scenarios and responses.
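To illustrate the two supported instruction formats, here is a minimal prompt-building sketch. The role markers (`<|system|>`, `<|user|>`, `<|model|>` for Metharme; `### Instruction:` / `### Response:` headers for Alpaca) follow the conventions commonly used with Pygmalion-2 and Alpaca-tuned models; verify them against the model card before use.

```python
def metharme_prompt(system: str, user: str) -> str:
    """Build a Metharme-style prompt using role-marker special tokens.

    Assumes the <|system|>/<|user|>/<|model|> convention used by
    Pygmalion-2-style models; the model generates after <|model|>.
    """
    return f"<|system|>{system}<|user|>{user}<|model|>"


def alpaca_prompt(instruction: str) -> str:
    """Build an Alpaca-style prompt with instruction/response headers.

    The model is expected to continue after the '### Response:' header.
    """
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )


if __name__ == "__main__":
    print(metharme_prompt("You are a wandering knight.", "Greet the traveler."))
    print(alpaca_prompt("Describe the tavern the party has just entered."))
```

Either format can be fed to the model as the raw input string; pick one format per conversation and keep it consistent across turns.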

Intended Use Cases

  • Interactive Roleplay: Ideal for applications requiring dynamic and intelligent character interactions.
  • Text-Based Games: Can be used to power text adventure games by generating scenarios and options.

Limitations

  • Bias: Exhibits biases similar to those found in niche online roleplaying communities, in addition to the base model's biases.
  • Factual Accuracy: Not intended for providing factual information or advice.