TareksGraveyard/Stylizer-V2-LLaMa-70B

Public · 70B parameters · FP8 · 32768-token context

Overview

TareksGraveyard/Stylizer-V2-LLaMa-70B is a 70-billion-parameter merged language model. It was constructed using the SCE merge method, with huihui-ai/Llama-3.3-70B-Instruct-abliterated serving as its base model. The merge integrates contributions from four Llama-3.1- and Llama-3.3-based models, aiming to combine their respective strengths.

Merge Details

The merge incorporates the following component models, each with a weight of 0.20:

  • nbeerbower/Llama-3.1-Nemotron-lorablated-70B
  • Sao10K/L3-70B-Euryale-v2.1
  • mlabonne/Hermes-3-Llama-3.1-70B-lorablated
  • SicariusSicariiStuff/Negative_LLAMA_70B

This configuration uses the llama3 chat template, bfloat16 weights, and a union tokenizer source for comprehensive vocabulary coverage. The SCE method, as described in its associated paper, is designed to create robust merged models by selectively combining parameters.
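The details above could be expressed as a mergekit configuration along these lines. This is a hypothetical reconstruction, not the author's published config; in particular, SCE-specific parameters beyond those stated (weights, dtype, tokenizer source) are omitted because they are not given in the card:

```yaml
# Hypothetical mergekit sketch of the described merge (assumed, not the published config).
merge_method: sce
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
models:
  - model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
    parameters:
      weight: 0.20
  - model: Sao10K/L3-70B-Euryale-v2.1
    parameters:
      weight: 0.20
  - model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
    parameters:
      weight: 0.20
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      weight: 0.20
dtype: bfloat16
tokenizer_source: union
```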

Potential Use Cases

  • General Text Generation: Leveraging the diverse training of its base models for various creative and informative text tasks.
  • Instruction Following: Benefits from the instruct-tuned nature of its primary base model.
  • Exploration of Merged Model Capabilities: Useful for researchers and developers interested in the performance characteristics of SCE-merged Llama-family models.
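Because the model uses the llama3 chat template, prompts should follow the Llama-3 header format. The sketch below is a hypothetical helper that renders that format by hand for illustration; in practice you would load the model's tokenizer and call `tokenizer.apply_chat_template` from the transformers library instead.

```python
def format_llama3_chat(messages):
    """Render a list of {"role", "content"} dicts into the Llama-3 prompt format.

    Illustrative only: prefer tokenizer.apply_chat_template in real use.
    """
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about merging models."},
])
```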