RicardoEstep/RPBizkit-v5-12B-Lorablated Overview
This 12-billion-parameter model, created by RicardoEstep, is an experimental merge designed for roleplay. It was built with a two-stage merging process that combines characteristics from numerous existing models.
Key Capabilities & Technical Details
- Hybrid Merging: The model is a two-stage blend. First, 18 different "RP Uncensored" models were merged with Mergekit's Karcher-mean method, which gives every input model equal weight. The result was then processed with a custom Python script that applied the nbeerbower/Mistral-Nemo-12B-abliterated-LORA adapter using a hybrid scaling approach (0.7 for attention layers, 0.3 for MLP layers); sketches of both stages follow this list.
- Tokenizer & Embeddings: Features clean, matching tokenizer and embedding sizes (131072), based on Mistral. Note, however, that the model may drift if standard ChatML or Mistral chat templates are used.
- Recommended Chat Template: The Alpaca template with "RAW" inputs is recommended for best results; the bundled configuration files are pre-tweaked to steer loaders away from other chat templates. A prompt sketch follows this list.
- Context Length: While the architecture theoretically supports a 128K context window, the recommended maximum for stable performance is 8K (8192) tokens; see the loading sketch after this list.
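For reference, the Karcher-mean stage corresponds to a standard Mergekit run. A minimal sketch follows, assuming Mergekit's karcher merge method and using placeholder model names (the actual 18-model recipe is not reproduced here):

```python
# Minimal sketch: write a Karcher-mean mergekit config and run it via the
# mergekit-yaml CLI. The two model names are placeholders; the actual recipe
# merged 18 "RP Uncensored" models.
import subprocess
import textwrap

CONFIG = textwrap.dedent("""\
    merge_method: karcher   # Karcher mean weights every input model equally
    models:
      - model: example-org/rp-uncensored-a-12b   # placeholder
      - model: example-org/rp-uncensored-b-12b   # placeholder
    dtype: bfloat16
""")

with open("karcher-merge.yml", "w") as f:
    f.write(CONFIG)

subprocess.run(["mergekit-yaml", "karcher-merge.yml", "./rpbizkit-base"], check=True)
```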
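The custom LoRA-application script is not published with the card, but a hybrid-scaled merge of this kind might look like the sketch below. It assumes a PEFT-style adapter layout (lora_A/lora_B tensor pairs) and illustrative file paths, and it folds the usual alpha/rank factor into the hybrid scale for brevity:

```python
# Sketch of hybrid-scaled LoRA application (0.7 attention / 0.3 MLP).
# Key names follow common Mistral-style checkpoints; paths are placeholders.
import torch
from safetensors.torch import load_file, save_file

ATTN = ("q_proj", "k_proj", "v_proj", "o_proj")
MLP = ("gate_proj", "up_proj", "down_proj")

def hybrid_scale(name: str) -> float:
    """0.7 for attention projections, 0.3 for MLP, 0.0 otherwise."""
    if any(k in name for k in ATTN):
        return 0.7
    if any(k in name for k in MLP):
        return 0.3
    return 0.0

def apply_lora(base: dict, lora: dict) -> None:
    """Fold scaled LoRA deltas (scale * B @ A) into the base weights."""
    for key, a in lora.items():
        if ".lora_A." not in key:
            continue
        b = lora[key.replace(".lora_A.", ".lora_B.")]
        target = key.split(".lora_A.")[0].removeprefix("base_model.model.") + ".weight"
        scale = hybrid_scale(target)
        if scale > 0.0 and target in base:
            delta = (b.float() @ a.float()) * scale
            base[target] = base[target] + delta.to(base[target].dtype)

base_sd = load_file("rpbizkit-base/model.safetensors")             # placeholder path
lora_sd = load_file("abliterated-lora/adapter_model.safetensors")  # placeholder path
apply_lora(base_sd, lora_sd)
save_file(base_sd, "rpbizkit-lorablated.safetensors")
```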
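What "RAW" inputs look like in practice is not spelled out; the sketch below assumes the standard Alpaca preamble with the user's text passed through verbatim in the Input block:

```python
# Sketch of the recommended Alpaca prompt format. Exactly how "RAW" inputs
# should be wrapped is an assumption: here the user text is passed through
# untouched in the ### Input block.
def alpaca_prompt(instruction: str, raw_input: str) -> str:
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{raw_input}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Continue the roleplay scene.", "The tavern door creaks open..."))
```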
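And a loading sketch that respects the 8K ceiling, using the transformers library; the generation parameters are illustrative rather than tuned recommendations:

```python
# Sketch: load the model with transformers and truncate input to the
# recommended 8K ceiling rather than the theoretical 128K.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "RicardoEstep/RPBizkit-v5-12B-Lorablated"
MAX_CONTEXT = 8192  # recommended stable maximum from this card

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")

prompt = "### Instruction:\nContinue the scene.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True,
                   max_length=MAX_CONTEXT).to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```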
Good For
- Roleplay Scenarios: Built specifically from a combination of "RP Uncensored" models, making it well suited to diverse and largely unconstrained roleplay applications.
- Experimental Use Cases: Ideal for users interested in exploring models created through complex merging and LoRA application techniques.