RicardoEstep/RPBizkit-v4-12B
RicardoEstep/RPBizkit-v4-12B is an experimental 12-billion-parameter Karcher-Mean merge of several "RP Uncensored" models, created by RicardoEstep using Mergekit. The model is designed for roleplay scenarios and blends models such as Krix-12B, AngelSlayer-12B, and EtherealAurora-12B. Its recommended maximum context size is 8K tokens, even though some of the underlying models carry "fake rope_theta hack" configurations that advertise larger contexts. The tokenizer is noted as complex: for best results, use an Alpaca chat template with "RAW" inputs rather than the bundled templates.
RicardoEstep/RPBizkit-v4-12B Overview
This model is an experimental 12 billion parameter merge, created by RicardoEstep using the Mergekit tool with a Karcher-Mean method. It combines several known "RP Uncensored" models, including DreadPoor/Krix-12B-Model_Stock, redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3, yamatazen/EtherealAurora-12B, and others, with all constituent models and their data given equal importance in the merge.
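The merge described above can be sketched as a Mergekit configuration. This is a hypothetical reconstruction, not the author's actual file: the `karcher` method name follows Mergekit's Karcher-mean support, and the parameter keys shown are assumptions.

```yaml
# Hypothetical Mergekit config sketch -- not the author's published config.
# "karcher" is Mergekit's Karcher-mean merge method; listing models without
# per-model weights gives each constituent equal importance, matching the
# card's description.
models:
  - model: DreadPoor/Krix-12B-Model_Stock
  - model: redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3
  - model: yamatazen/EtherealAurora-12B
merge_method: karcher
parameters:
  max_iter: 99    # assumed iteration cap for the Karcher-mean solver
  tol: 1.0e-9     # assumed convergence tolerance
dtype: bfloat16
```

A config like this would be passed to `mergekit-yaml` to produce the merged checkpoint; the actual recipe may include additional constituent models and different parameters.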
Key Characteristics
- Merge Method: Uses a Karcher-Mean merge, which averages the constituent models' weights via the Karcher (Riemannian) mean rather than a plain linear average, an approach associated with stable, high-quality merges.
- Context Length: The recommended maximum context size is 8K (8192) tokens. While some merged models used a "fake rope_theta hack" for larger contexts, this model does not provide meaningful long-context behavior beyond 8K.
- Tokenizer Behavior: The tokenizer is noted as complex, and the model will drift if standard ChatML or Mistral chat templates are used. The recommended chat template is "Alpaca (with 'RAW' inputs)", and the configuration files are tweaked to avoid using any chat template by default.
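The chat-template caveat above can be illustrated with a small sketch: instead of calling `tokenizer.apply_chat_template`, build the Alpaca prompt string by hand and feed it to the model as raw text. The template literal below is a generic Alpaca layout and an assumption; match it to whatever your frontend's "Alpaca (RAW)" preset actually produces.

```python
# Sketch: hand-built Alpaca prompt, bypassing any bundled chat template.
# The exact "RAW" convention is an assumption; adjust to your frontend.

# Generic Alpaca instruction/response layout (assumed, not from the card).
ALPACA_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

# Recommended cap from the model card; do not rely on longer contexts.
MAX_CONTEXT_TOKENS = 8192


def build_alpaca_prompt(instruction: str) -> str:
    """Return a plain Alpaca-style prompt string with no chat template applied."""
    return ALPACA_TEMPLATE.format(instruction=instruction)


prompt = build_alpaca_prompt("Describe the tavern the party just entered.")
print(prompt)
```

When loading the model with `transformers`, the resulting string would be tokenized directly (e.g. `tokenizer(prompt, return_tensors="pt")`), keeping the total prompt plus generation length under the 8K cap.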
Use Cases
This model is primarily intended for roleplay (RP) scenarios, given its foundation in "RP Uncensored" models. Users should be aware of the specific tokenizer and chat template requirements for optimal interaction.