KaraKaraWitch/L3.1-70b-Inori
Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quant: FP8 · Ctx Length: 32k · Architecture: Transformer · Status: Warm

KaraKaraWitch/L3.1-70b-Inori is a 70 billion parameter Llama 3.1-based model merge, created by KaraKaraWitch, with a 32768 token context length. It combines several specialized components: Dracarys (code), Euryale, Cathallama, New Dawn, Celeste (roleplay), and Japanese-Instruct (enhanced Japanese language capabilities). It is designed as a versatile merge, though the creator notes potential inconsistencies in censorship behavior.


Overview

KaraKaraWitch/L3.1-70b-Inori is a 70 billion parameter language model based on the Llama 3.1 architecture, created by KaraKaraWitch. It is a merge of several distinct models using the "Model Stock" method, aiming to combine their strengths. It supports a 32768 token context length, making it suitable for processing longer inputs.
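The "Model Stock" method (Jang et al., 2024) averages the fine-tuned models' weights and then interpolates that average back toward the base model, with the interpolation ratio derived from the angle between the fine-tunes' task vectors. A toy 1-D sketch of the idea (not the actual mergekit implementation, and assuming at least two fine-tunes):

```python
import numpy as np

def model_stock_merge(base, finetunes):
    """Toy per-tensor Model Stock merge, sketched on flat 1-D weight vectors.

    base:      weight vector of the pretrained/base model
    finetunes: list of k >= 2 weight vectors of fine-tuned models
    """
    k = len(finetunes)
    # Task vectors: how each fine-tune moved away from the base.
    deltas = [w - base for w in finetunes]
    # Average pairwise cosine similarity between task vectors.
    cos_vals = []
    for i in range(k):
        for j in range(i + 1, k):
            cos_vals.append(
                np.dot(deltas[i], deltas[j])
                / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j]))
            )
    cos_theta = float(np.mean(cos_vals))
    # Closed-form interpolation ratio from the Model Stock paper.
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = np.mean(finetunes, axis=0)
    # Pull the fine-tune average back toward the base by (1 - t).
    return t * w_avg + (1 - t) * base
```

When the task vectors are nearly orthogonal (cos θ ≈ 0), t approaches 0 and the merge stays close to the base; when they agree (cos θ ≈ 1), t approaches 1 and the merge is simply their average.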

Key Components & Capabilities

Inori integrates various specialized models, including:

  • abacusai/Dracarys-Llama-3.1-70B-Instruct: Potentially useful for code generation.
  • Sao10K/L3-70B-Euryale-v2.1
  • gbueno86/Cathallama-70B
  • sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1
  • nothingiisreal/L3.1-70B-Celeste-V0.1-BF16: Contributes to roleplay capabilities.
  • cyberagent/Llama-3.1-70B-Japanese-Instruct-2407: Enhances Japanese language understanding and generation.

Noteworthy Aspects

The merge builds on lessons from the creator's previous merges and uses Glitz as its base; the merge itself was performed with LazyMergekit. The creator notes that the model exhibits inconsistent censorship behavior, sometimes triggering content restrictions and other times not. Due to these inconsistencies, the creator does not recommend it for general use.
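LazyMergekit is a wrapper around mergekit, which describes merges in a YAML config. A hypothetical reconstruction of what such a `model_stock` config might look like for this merge is below; the card does not publish the actual config, and the base model id is a placeholder:

```yaml
# Hypothetical reconstruction; not the published config for Inori.
models:
  - model: abacusai/Dracarys-Llama-3.1-70B-Instruct
  - model: Sao10K/L3-70B-Euryale-v2.1
  - model: gbueno86/Cathallama-70B
  - model: sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1
  - model: nothingiisreal/L3.1-70B-Celeste-V0.1-BF16
  - model: cyberagent/Llama-3.1-70B-Japanese-Instruct-2407
merge_method: model_stock
base_model: KaraKaraWitch/Glitz  # placeholder: the card names "Glitz" but not a full repo id
dtype: bfloat16
```

In mergekit's `model_stock` method, `base_model` supplies the anchor weights and the `models` list supplies the fine-tunes to average.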

Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model vary across the following sampler settings:

  • `temperature`
  • `top_p`
  • `top_k`
  • `frequency_penalty`
  • `presence_penalty`
  • `repetition_penalty`
  • `min_p`
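These sampler names map directly onto the fields of a typical OpenAI-compatible completion request. A minimal sketch of building such a request payload; the values below are hypothetical placeholders, not the community presets (which are not reproduced here):

```python
# Hypothetical sampler preset; the actual "top 3" values are not shown on this page.
preset = {
    "temperature": 0.8,
    "top_p": 0.95,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}

# Merge the preset into a completion request body.
payload = {
    "model": "KaraKaraWitch/L3.1-70b-Inori",
    "prompt": "Hello",
    **preset,
}
```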