TareksGraveyard/Stylizer-V2-LLaMa-70B

Text generation · Model size: 70B · Quant: FP8 · Context length: 32k · Published: Apr 14, 2025 · Architecture: Transformer · Concurrency cost: 4

TareksGraveyard/Stylizer-V2-LLaMa-70B is a 70 billion parameter language model created by TareksGraveyard using the SCE merge method. It is based on huihui-ai/Llama-3.3-70B-Instruct-abliterated and combines several Llama-3.1 and Llama-3.3 variants. This model is designed to leverage the combined strengths of its constituent models for general-purpose language generation tasks.


Overview

TareksGraveyard/Stylizer-V2-LLaMa-70B is a 70 billion parameter merged language model. It was constructed with the SCE merge method, with huihui-ai/Llama-3.3-70B-Instruct-abliterated serving as the base model. The merge integrates contributions from four distinct Llama-3.1 and Llama-3.3 derived models, aiming to combine their respective strengths.

Merge Details

The model incorporates the following components, each contributing with a weight of 0.20:

  • nbeerbower/Llama-3.1-Nemotron-lorablated-70B
  • Sao10K/L3-70B-Euryale-v2.1
  • mlabonne/Hermes-3-Llama-3.1-70B-lorablated
  • SicariusSicariiStuff/Negative_LLAMA_70B

This configuration uses a llama3 chat template, bfloat16 weights, and a union tokenizer source for comprehensive vocabulary coverage. The SCE method, as described in its associated paper, builds robust merged models by selectively combining parameters from the contributing checkpoints.
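The details above can be sketched as a mergekit-style configuration. This is a hypothetical reconstruction, not the author's published config file: the field layout follows mergekit's general merge-config format, and the per-model `weight` of 0.20 is taken from the list above, but any SCE-specific tuning parameters the author used are unknown and omitted here.

```yaml
# Hypothetical mergekit config sketch for Stylizer-V2-LLaMa-70B.
# Structure is illustrative; only the model names, weights, dtype,
# chat template, and tokenizer source are stated on this card.
merge_method: sce
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
models:
  - model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
    parameters:
      weight: 0.20
  - model: Sao10K/L3-70B-Euryale-v2.1
    parameters:
      weight: 0.20
  - model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
    parameters:
      weight: 0.20
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      weight: 0.20
dtype: bfloat16
chat_template: llama3
tokenizer_source: union
```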

Potential Use Cases

  • General Text Generation: Leveraging the diverse training of its base models for various creative and informative text tasks.
  • Instruction Following: Benefits from the instruct-tuned nature of its primary base model.
  • Exploration of Merged Model Capabilities: Useful for researchers and developers interested in the performance characteristics of SCE-merged Llama-family models.

Popular Sampler Settings

The top parameter combinations used by Featherless users for this model adjust the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
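As a minimal sketch of how these sampler settings are typically passed, the snippet below builds the JSON body for an OpenAI-compatible chat completions request. All numeric values are illustrative placeholders, not the community-reported configurations; `repetition_penalty` and `min_p` are extensions beyond the core OpenAI parameters that some OpenAI-compatible servers accept.

```python
import json

# Hypothetical request body for an OpenAI-compatible chat completions
# endpoint. Sampler values below are placeholders for illustration only.
payload = {
    "model": "TareksGraveyard/Stylizer-V2-LLaMa-70B",
    "messages": [
        {"role": "user", "content": "Write a short scene in a noir style."},
    ],
    "temperature": 0.9,          # randomness of sampling
    "top_p": 0.95,               # nucleus sampling cutoff
    "top_k": 40,                 # restrict to the k most likely tokens
    "frequency_penalty": 0.0,    # penalize tokens by occurrence count
    "presence_penalty": 0.0,     # penalize tokens already present
    "repetition_penalty": 1.05,  # multiplicative repetition penalty
    "min_p": 0.05,               # drop tokens below this relative probability
}

# Serialize the body as it would be sent over HTTP.
print(json.dumps(payload, indent=2))
```

Sending this payload additionally requires an HTTP client and the provider's base URL and API key, which are out of scope here.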