Overview
DS-Archive/mythalion-supercot-limarpv3-gradient-13b is a 13-billion-parameter model built on the Llama 2 architecture. It is a merge of three components: the PygmalionAI/mythalion-13b base model and two LoRA adapters, Doctor-Shotgun/llama-2-supercot-lora and lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT. The merge was performed with PEFT adapters and Zaraki's zarakitools, with the specific goal of enhancing roleplaying capabilities.
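For orientation, the sketch below shows a plain PEFT adapter merge. It assumes both adapter repos are PEFT-format LoRAs, and it deliberately omits the layer-wise gradient weighting, which zarakitools handled in the actual build:

```python
# Minimal sketch of a flat (non-gradient) LoRA merge with Hugging Face PEFT.
# The released model used zarakitools for the layer-wise gradient merge; this
# only illustrates the general adapter-merging workflow and assumes both
# adapter repos are PEFT-compatible LoRAs.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("PygmalionAI/mythalion-13b")

# Fold the SuperCoT LoRA into the base weights.
model = PeftModel.from_pretrained(base, "Doctor-Shotgun/llama-2-supercot-lora")
model = model.merge_and_unload()

# Fold the LimaRPv3 adapter in on top.
model = PeftModel.from_pretrained(model, "lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT")
model = model.merge_and_unload()

model.save_pretrained("mythalion-supercot-limarpv3-13b")
```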
Key Capabilities & Features
- Gradient Merge Architecture: The SuperCoT LoRA is weighted toward the deeper layers and LimaRPv3 toward the shallower layers, with each LoRA's weight averaging 0.5 across the model (an illustrative weight schedule follows this list).
- Enhanced Roleplaying: The model aims to combine the stylistic elements of Mythalion with the instruction-following and length control of LimaRPv3, making it highly suitable for detailed and controlled roleplay.
- Response Length Control: Inherited from LimaRPv3, this feature lets users specify a desired response length (tiny, short, medium, long, huge, humongous, extreme, or unlimited) directly in the prompt, giving granular control over output verbosity (see the prompt sketch after this list).
- Flexible Prompt Formats: Supports multiple prompt formats, including LimaRPv3's Alpaca instruction format and the Pygmalion/Metharme format, offering adaptability across use cases.
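The exact gradient schedule is not documented here; one plausible reading, shown purely as a hypothetical sketch, is a linear ramp per layer whose per-LoRA mean is 0.5:

```python
# Hypothetical linear "gradient" schedule over Llama 2 13B's 40 decoder layers.
# SuperCoT's scale ramps up with depth, LimaRPv3's ramps down; both average 0.5,
# matching the description above. The schedule zarakitools actually used may differ.
NUM_LAYERS = 40

supercot = [i / (NUM_LAYERS - 1) for i in range(NUM_LAYERS)]  # 0.00 ... 1.00
limarp = [1.0 - w for w in supercot]                          # 1.00 ... 0.00

# Each LoRA's weight averages 0.5 across all layers.
assert abs(sum(supercot) / NUM_LAYERS - 0.5) < 1e-9
assert abs(sum(limarp) / NUM_LAYERS - 0.5) < 1e-9
```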
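As an example of the length-control and Alpaca conventions described above (character names and dialogue are placeholders; the modifier syntax follows the LimaRPv3 card's Alpaca format):

```python
# Illustrative LimaRPv3-style Alpaca prompt with a length modifier.
# Persona and messages are placeholders, not canonical examples.
length = "medium"  # tiny, short, medium, long, huge, humongous, extreme, unlimited

prompt = f"""### Instruction:
Write Aria's next reply in this roleplay with Sam. Aria is a witty starship engineer.

### Input:
Sam: The reactor is making that noise again. Can you take a look?

### Response: (length = {length})
Aria:"""
```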
Intended Use Cases
This model is primarily intended for advanced roleplaying applications where detailed character interaction, persona adherence, and precise control over response length are crucial. It is not designed for factual information retrieval or providing advice.