Overview
royallab/Pygmalion-2-13b-SuperCOT2 is a 13-billion-parameter language model created by merging the Pygmalion-2-13b base model with Ausboss's Llama2 SuperCOT2 LoRAs. The merge was performed with a command-line version of EzTrainer by CoffeeVampire/Blackroot, via zaraki-tools by Zaraki. This iteration aims to make Pygmalion 'smarter' by integrating the SuperCOT2 LoRA, which was trained in a style closer to the original SuperCOT for LLaMA 1.
Key Capabilities
- Enhanced Roleplaying: Specifically designed to improve the conversational and reasoning abilities of the Pygmalion base model for roleplaying contexts.
- Instruction Format Compatibility: Supports both Metharme and Alpaca instruction formats, allowing for flexible integration into various applications.
- Text Adventure Generation: Suitable for generating interactive text adventure scenarios and responses.
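As a rough illustration of the two instruction formats listed above, the sketch below assembles prompts in the Metharme and Alpaca styles. The exact tags follow common community conventions (Metharme's <|system|>/<|user|>/<|model|> markers, Alpaca's ### headers) and are an assumption here; verify them against the model card's own prompt-format section before relying on them.

```python
def build_metharme_prompt(system: str, user: str) -> str:
    """Assemble a Metharme-style prompt string (assumed tag layout)."""
    return f"<|system|>{system}<|user|>{user}<|model|>"


def build_alpaca_prompt(instruction: str) -> str:
    """Assemble an Alpaca-style prompt string (assumed header layout)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )


# Example usage: the generated string would be passed to the model as-is,
# and generation continues after the final <|model|> or "### Response:" marker.
prompt = build_metharme_prompt("Enter roleplay mode.", "Describe the tavern.")
```

Either format should work for this model according to the compatibility note above; pick one and use it consistently within a session.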
Intended Use Cases
- Interactive Roleplay: Ideal for applications requiring dynamic and intelligent character interactions.
- Text-Based Games: Can be used to power text adventure games by generating scenarios and options.
Limitations
- Bias: Inherits the base model's biases and additionally exhibits biases characteristic of niche online roleplaying communities.
- Factual Accuracy: Not intended for providing factual information or advice.