Tarek07/Legion-V2.1-LLaMa-70B

Parameters: 70B · Precision: FP8 · Context length: 32768 · License: llama3.3
Overview

Tarek07/Legion-V2.1-LLaMa-70B is a 70-billion-parameter language model by Tarek07, created through a "Hyper Multi Model Merge" process that combines 20 distinct specialized models into a single versatile, uncensored model. The methodology first produced five core specialized models: an uncensored base, an intelligence-focused model (selected using UGI, Willingness, and NatInt leaderboard scores), a descriptive-writing model, a roleplay (RP) model, and a final "unhinged, uncensored" component. These were then iteratively refined and combined to form Legion.

Key Capabilities

  • Uncensored Generation: Designed to be fully uncensored, with no built-in content restrictions.
  • Enhanced Intelligence: Incorporates models optimized for intelligence, drawing from UGI, Willingness, and NatInt leaderboards.
  • Descriptive Writing: Excels in generating creative and natural prose, suitable for detailed narrative tasks.
  • Roleplay Specialization: Merged with models fine-tuned on extensive RP datasets, making it adept at roleplaying scenarios.
  • Hyper-Merge Architecture: Built with the DARE TIES merge method on top of TareksLab/L-BASE-V1, integrating multiple specialized components (a toy sketch of the idea follows this list).
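
For intuition, here is a toy sketch of the DARE TIES idea on a single weight tensor. It is illustrative only: the function name and simplifications are ours, and mergekit's actual implementation operates per parameter group with more careful sign election and rescaling.

```python
# Toy sketch of the DARE TIES idea on one weight tensor, for intuition only.
# The function and its simplifications are hypothetical, not mergekit's code.
import torch

def dare_ties(base, finetuned, weights, density=0.5):
    """Merge fine-tuned variants of `base` via a simplified DARE TIES."""
    sparsified = []
    for ft in finetuned:
        delta = ft - base                          # task vector
        keep = torch.rand_like(delta) < density    # DARE: randomly drop deltas
        sparsified.append(delta * keep / density)  # rescale the survivors
    stacked = torch.stack([w * d for w, d in zip(weights, sparsified)])
    elected = torch.sign(stacked.sum(dim=0))       # TIES: elect a majority sign
    agreeing = torch.where(torch.sign(stacked) == elected,
                           stacked, torch.zeros_like(stacked))
    return base + agreeing.sum(dim=0)              # apply the consensus deltas

# Tiny demo on random tensors, mirroring this card's weights of 0.20 each.
torch.manual_seed(0)
base = torch.zeros(4, 4)
variants = [base + 0.1 * torch.randn(4, 4) for _ in range(4)]
merged = dare_ties(base, variants, weights=[0.20] * 4, density=0.5)
```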

Usage Recommendations

  • Quantization: Because of the model's heavily merged construction, it is recommended to run it at no lower than a Q5 quantization.
  • Settings: Suggested inference settings are a Temperature of 1.0 and a Min P of 0.02, applied in the sketch below.
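
As a starting point, the suggested sampling settings might be applied like this. This is a minimal sketch, assuming the repository loads through transformers' AutoModelForCausalLM (with accelerate installed for device_map="auto") and that your transformers version supports the min_p generation parameter; running a 70B model also requires substantial hardware.

```python
# Minimal inference sketch; assumes a transformers-compatible checkpoint and
# a transformers version recent enough to support the min_p parameter.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tarek07/Legion-V2.1-LLaMa-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "Describe a storm rolling over a coastal village."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Suggested settings from this card: Temperature 1.0, Min P 0.02.
output = model.generate(
    input_ids, max_new_tokens=512, do_sample=True, temperature=1.0, min_p=0.02
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```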

Merge Details

The model was constructed using mergekit and the DARE TIES method. The primary base model for the merge was TareksLab/L-BASE-V1, with contributions from TareksLab/L2-MERGE4, TareksLab/L2-MERGE1, TareksLab/L2-MERGE3, and TareksLab/L2-MERGE2a, each weighted at 0.20 with a density of 0.5.
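
Based on those details, the merge recipe likely resembled the following mergekit YAML. This is a reconstruction, not the author's published config: field names follow mergekit's documented schema, and the dtype is an assumption.

```yaml
# Hypothetical reconstruction of the merge recipe from the details above;
# not the author's published config. dtype is an assumption.
merge_method: dare_ties
base_model: TareksLab/L-BASE-V1
models:
  - model: TareksLab/L2-MERGE4
    parameters:
      weight: 0.20
      density: 0.5
  - model: TareksLab/L2-MERGE1
    parameters:
      weight: 0.20
      density: 0.5
  - model: TareksLab/L2-MERGE3
    parameters:
      weight: 0.20
      density: 0.5
  - model: TareksLab/L2-MERGE2a
    parameters:
      weight: 0.20
      density: 0.5
dtype: bfloat16
```

A config like this would be run with mergekit's command-line entry point, e.g. `mergekit-yaml config.yml ./output-model`.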