RicardoEstep/RPBizkit-v5-12B-Lorablated

Text Generation · Concurrency Cost: 1 · Model Size: 12B · Quant: FP8 · Context Length: 32k · Published: Feb 17, 2026 · Architecture: Transformer

RicardoEstep/RPBizkit-v5-12B-Lorablated is an experimental 12-billion-parameter language model created by RicardoEstep, built through a two-part merging process: a Karcher-mean merge with Mergekit followed by a custom Python script. It combines 18 different "RP Uncensored" models and integrates a LoRA with hybrid scaling. The model is designed specifically for roleplay scenarios, blending characteristics from its diverse base models.
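The first stage of the merge uses the Karcher mean, i.e. the Fréchet mean on a Riemannian manifold. As a rough illustration of the idea, the sketch below computes an iterative Karcher mean of unit-normalized vectors on the hypersphere; this is a simplified stand-in, not Mergekit's actual implementation, and the function name and normalization choices are assumptions for illustration only.

```python
import numpy as np

def karcher_mean_sphere(vectors, iters=50, tol=1e-9):
    """Iterative Karcher (Frechet) mean of vectors on the unit hypersphere.

    Simplified sketch: each input is normalized onto the unit sphere, then
    the mean is refined by averaging log-maps at the current estimate and
    stepping back onto the sphere with the exponential map.
    """
    xs = [v / np.linalg.norm(v) for v in vectors]
    mu = xs[0].copy()
    for _ in range(iters):
        tangents = []
        for x in xs:
            dot = np.clip(np.dot(mu, x), -1.0, 1.0)
            theta = np.arccos(dot)  # geodesic distance from mu to x
            if theta < 1e-12:
                tangents.append(np.zeros_like(mu))
            else:
                # Log map: project x into the tangent space at mu.
                tangents.append(theta * (x - dot * mu) / np.sin(theta))
        step = np.mean(tangents, axis=0)
        norm = np.linalg.norm(step)
        if norm < tol:
            break
        # Exponential map: move along the averaged tangent direction.
        mu = np.cos(norm) * mu + np.sin(norm) * (step / norm)
        mu /= np.linalg.norm(mu)
    return mu
```

Unlike a plain arithmetic average, this gives every source model equal geometric weight regardless of tensor magnitude, which matches the "equal importance" property attributed to the Karcher-mean merge.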


RicardoEstep/RPBizkit-v5-12B-Lorablated Overview

This 12 billion parameter model, created by RicardoEstep, is an experimental mix designed for roleplay. It was developed using a two-part merging process to combine characteristics from numerous existing models.

Key Capabilities & Technical Details

  • Hybrid Merging: The model is a unique blend, first created by merging 18 different "RP Uncensored" models with the Karcher-mean method in Mergekit, which gives all source models equal weight in the merge. This initial merge was then further processed with a custom Python script that applied a specific LoRA (nbeerbower/Mistral-Nemo-12B-abliterated-LORA) using a hybrid scaling approach (0.7 for attention, 0.3 for MLP).
  • Tokenizer & Embeddings: Uses a Mistral-based tokenizer with a matching embedding size (131,072 entries), so tokenizer and embeddings are cleanly aligned. Note, however, that the model may drift if standard ChatML or Mistral chat templates are used.
  • Recommended Chat Template: The Alpaca template with "RAW" inputs is recommended for optimal performance, with configuration files pre-tweaked to avoid other chat templates.
  • Context Length: While theoretically supporting a 128K context size, the recommended maximum context size for stable performance is 8K (8192) tokens.
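The hybrid LoRA scaling mentioned above can be sketched as a per-module merge of the LoRA delta into the base weights: a LoRA contributes `B @ A`, and the scale applied depends on whether the target module is an attention or MLP projection. The snippet below is a minimal illustration under assumed module names and shapes; `merge_lora` and the name-matching rule are hypothetical and not taken from the actual merge script.

```python
import numpy as np

# Scales described in the model card: 0.7 for attention, 0.3 for MLP.
ATTN_SCALE = 0.7
MLP_SCALE = 0.3

def merge_lora(base, lora_a, lora_b, module_name):
    """Return base + scale * (B @ A), with scale chosen by module type.

    module_name matching ("attn" vs. anything else) is a hypothetical
    convention for this sketch, not the real script's logic.
    """
    scale = ATTN_SCALE if "attn" in module_name else MLP_SCALE
    return base + scale * (lora_b @ lora_a)

# Usage: apply the delta to an attention projection vs. an MLP projection.
base = np.zeros((4, 4))
a = np.random.randn(2, 4)   # LoRA down-projection (rank 2)
b = np.random.randn(4, 2)   # LoRA up-projection
merged_attn = merge_lora(base, a, b, "self_attn.q_proj")  # scaled by 0.7
merged_mlp = merge_lora(base, a, b, "mlp.down_proj")      # scaled by 0.3
```

Scaling attention more heavily than the MLP layers biases the LoRA's influence toward how the model attends to context rather than toward its feed-forward transformations.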

Good For

  • Roleplay Scenarios: Specifically engineered by combining various "RP Uncensored" models, making it suitable for diverse and potentially unconstrained roleplaying applications.
  • Experimental Use Cases: Ideal for users interested in exploring models created through complex merging and LoRA application techniques.