Himitsui/Kaiju-11B
Text Generation · Concurrency Cost: 1 · Model Size: 10.7B · Quant: FP8 · Ctx Length: 4k · Published: Feb 13, 2024 · License: cc-by-nc-4.0 · Architecture: Transformer

Himitsui/Kaiju-11B is an 11B-class (10.7 billion parameter) language model created by Himitsui, developed using Gryphe's MergeMonster tool. This model is specifically engineered to reduce 'GPT-isms' and the positivity bias often found in roleplay scenarios, merging popular base models from Sao10K and Kuromitsu alongside an Instruct-Uncensored tune. It is optimized for nuanced, less predictable conversational output, particularly in roleplaying contexts, and supports the Alpaca and Vicuna instruction formats.


Kaiju-11B: A Roleplay-Optimized Merge Model

Kaiju-11B is an experimental language model developed by Himitsui using Gryphe's MergeMonster tool. The primary goal of this 11B-class model is to mitigate common 'GPT-isms' and positivity bias, which can lead to predictable or formulaic ('slop') outputs, especially in roleplaying applications. It achieves this by merging several base models, including popular models from Sao10K and Kuromitsu as well as an Instruct-Uncensored tune, aiming for more dynamic and less constrained conversational responses.

Key Capabilities

  • Reduced 'GPT-isms': Engineered to produce less generic and more varied text, moving away from common AI-generated patterns.
  • Bias Mitigation: Specifically targets and reduces positivity bias, enhancing its suitability for diverse narrative and roleplay scenarios.
  • Flexible Instruction Formats: Compatible with both Alpaca and Vicuna instruction formats, ensuring broad usability with existing tools and workflows.
  • Roleplay Optimization: Designed with a focus on improving the quality and naturalness of roleplay interactions.
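
The Alpaca and Vicuna formats mentioned above are plain-text prompt templates. A minimal sketch of building each follows; the exact template strings are the commonly used community conventions, not text taken from this model card:

```python
def alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Build a prompt using the common Alpaca template."""
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
    )
    if user_input:
        # The optional "### Input:" section carries extra context.
        prompt += f"### Input:\n{user_input}\n\n"
    return prompt + "### Response:\n"


def vicuna_prompt(user_message: str) -> str:
    """Build a single-turn prompt using the common Vicuna template."""
    system = (
        "A chat between a curious user and an artificial intelligence "
        "assistant. The assistant gives helpful, detailed answers."
    )
    return f"{system} USER: {user_message} ASSISTANT:"


print(alpaca_prompt("Describe the scene.", "A rainy harbor at dusk."))
print(vicuna_prompt("Describe a rainy harbor at dusk."))
```

The model then continues generating after `### Response:` or `ASSISTANT:`, so frontends typically set those markers (or the next `### Instruction:`/`USER:`) as stop sequences.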

Good For

  • Developers and users seeking a model for creative writing and roleplaying that offers more nuanced and less predictable outputs.
  • Applications requiring a model with reduced inherent positivity bias.
  • Integration into platforms like SillyTavern, with a recommendation for the Universal-Light preset.

Popular Sampler Settings

The three most-used parameter combinations among Featherless users for this model are shown as interactive tabs on the model page. The tunable sampler parameters are:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
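
As a rough illustration of how these parameters are typically passed to an OpenAI-compatible completions endpoint, the sketch below builds a request payload. The values are placeholders (the actual top user configs live in the tabs above, not here), and the exact endpoint shape is an assumption based on common OpenAI-style APIs:

```python
import json

# Placeholder sampler values -- NOT the actual Featherless user configs.
sampler_config = {
    "temperature": 1.0,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

# Assemble an OpenAI-style completion request for this model.
payload = {
    "model": "Himitsui/Kaiju-11B",
    "prompt": "### Instruction:\nContinue the story.\n\n### Response:\n",
    "max_tokens": 256,
    **sampler_config,
}

print(json.dumps(payload, indent=2))
```

The payload would then be POSTed as JSON to the provider's completions route with an API key in the `Authorization` header.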