royallab/Pygmalion-2-13b-SuperCOT

Hugging Face
Text Generation · Model Size: 13B · Quant: FP8 · Context Length: 4K · Published: Sep 8, 2023 · License: llama2 · Architecture: Transformer · Concurrency Cost: 1 · Open Weights

Pygmalion-2-13b-SuperCOT is a 13 billion parameter language model developed by royallab, created by merging Pygmalion 2 13b with Ausboss's Llama2 SuperCOT loras. This merge aims to enhance the base Pygmalion-2 model's intelligence and reduce conversational drift. It is specifically optimized for roleplaying scenarios, leveraging the SuperCOT lora for improved coherence.


Overview

Pygmalion-2-13b-SuperCOT is a 13 billion parameter language model developed by royallab, resulting from a merge of two distinct models: Pygmalion 2 13b and Ausboss's Llama2 SuperCOT loras. The primary objective of this merge was to improve the intelligence and reduce conversational drift of the base Pygmalion-2 model, specifically by integrating the SuperCOT lora at a weight of 1.00.

Key Capabilities

  • Enhanced Coherence: The SuperCOT lora integration is intended to make the model "smarter" and less prone to drifting off-topic during interactions.
  • Roleplaying Optimization: Inherits the roleplaying capabilities of the Pygmalion-2 base model, with added stability from the SuperCOT merge.
  • Instruction Formats: Supports common instruction formats such as Metharme and Alpaca, facilitating integration into various applications.
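As a rough illustration of the two supported formats, the sketch below builds prompts using the published Metharme role tokens (`<|system|>`, `<|user|>`, `<|model|>`) and the standard Alpaca instruction/response headers. The helper function names are our own, not part of any library.

```python
# Sketch of the two prompt formats this model accepts. The role tokens and
# headers follow the Metharme and Alpaca conventions; the function names
# here are illustrative, not an official API.

def format_metharme(system: str, user: str) -> str:
    """Metharme format: <|system|>, <|user|>, <|model|> role tokens."""
    return f"<|system|>{system}<|user|>{user}<|model|>"

def format_alpaca(instruction: str) -> str:
    """Alpaca format: ### Instruction / ### Response headers."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

prompt = format_metharme(
    "Enter RP mode. You are a tavern keeper in a fantasy setting.",
    "Describe the room as I walk in.",
)
```

The generated text is then expected after the final `<|model|>` token (Metharme) or after the `### Response:` header (Alpaca).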

Intended Use Cases

  • Interactive Roleplaying: Designed for generating engaging and coherent responses in text-based roleplaying scenarios.
  • Conversational AI: Suitable for applications requiring sustained, context-aware dialogue where reducing conversational drift is crucial.

Limitations

  • Bias and Risks: The model exhibits biases similar to those found in niche online roleplaying communities, in addition to the biases present in its base models.
  • Not for Factual Information: It is explicitly not intended for providing factual information or advice of any kind.

Popular Sampler Settings

The sampler parameters most commonly tuned by Featherless users for this model include:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
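To show how these parameters fit together, the sketch below assembles a request body for an OpenAI-compatible completions endpoint. The parameter values are placeholders for illustration only, not the actual top configurations used by Featherless users.

```python
# Illustrative request body for an OpenAI-compatible completions endpoint.
# All numeric values below are assumed starting points, not measured
# community settings for this model.
import json

payload = {
    "model": "royallab/Pygmalion-2-13b-SuperCOT",
    "prompt": "<|system|>Enter RP mode.<|user|>Hello!<|model|>",
    "max_tokens": 256,
    "temperature": 0.7,         # randomness of token sampling
    "top_p": 0.9,               # nucleus sampling cutoff
    "top_k": 40,                # restrict sampling to the k most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they appeared
    "presence_penalty": 0.0,    # penalize tokens that appeared at all
    "repetition_penalty": 1.1,  # multiplicative penalty on repeated tokens
    "min_p": 0.05,              # drop tokens below this relative probability
}

body = json.dumps(payload)
```

For roleplay use, `repetition_penalty` and `min_p` are the parameters most often raised to counter loops and low-probability noise, though the right values depend on the client and scenario.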