ProdeusUnity/Dazzling-Star-Aurora-70b-v0.0-Experimental-0123

License: eva-llama3.3

Overview

Dazzling-Star-Aurora-70b-v0.0 is a 70-billion-parameter language model created by ProdeusUnity that uses the Llama 3.1 instruction format. It is a merge of two distinct 70B models: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 and ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3.
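Since the model follows the Llama 3.1 instruction format, prompts should use the Llama 3 header-token template. A minimal sketch of that prompt shape (an illustrative helper, not part of the model card; in practice the tokenizer's chat template handles this):

```python
def format_llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in the Llama 3 instruct format.

    Each turn is wrapped in <|start_header_id|>role<|end_header_id|>
    and terminated with <|eot_id|>; the prompt ends with an open
    assistant header so the model generates the reply.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

With `transformers`, the same result is normally obtained via `tokenizer.apply_chat_template`, which reads the template shipped with the model.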

Merge Details

The model was constructed using the TIES merge method with mergekit. The base model for this merge was unsloth/Meta-Llama-3.1-70B. The configuration applied specific weights and densities to each merged component:

  • EVA-LLaMA-3.33-70B-v0.1: weighted at 0.3 with a density of 0.7.
  • Llama-3.1-70B-ArliAI-RPMax-v1.3: weighted at 0.4 with a density of 0.8.
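The parameters above correspond to a mergekit configuration along these lines (a reconstruction from the stated values; the `dtype` field is an assumption, not taken from the card):

```yaml
merge_method: ties
base_model: unsloth/Meta-Llama-3.1-70B
models:
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
    parameters:
      weight: 0.3
      density: 0.7
  - model: ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3
    parameters:
      weight: 0.4
      density: 0.8
dtype: bfloat16  # assumed; not specified in the card
```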

This merging approach aims to combine the strengths of the constituent models into a single checkpoint. The merged model supports a context length of 32,768 tokens and is intended for general-purpose applications where a large parameter count and diverse training influences are beneficial.
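The TIES method referenced above resolves conflicts between the merged models in three steps: trim each model's low-magnitude parameter deltas (controlled by `density`), elect a majority sign per parameter, then average only the deltas that agree with that sign. A toy NumPy illustration over flat vectors (a sketch of the idea, not mergekit's actual implementation):

```python
import numpy as np

def ties_merge(base, tasks, weights, densities):
    """Toy TIES merge over 1-D parameter vectors (illustration only)."""
    deltas = []
    for task, w, d in zip(tasks, weights, densities):
        delta = task - base
        # Trim: keep only the top-d fraction of entries by magnitude.
        k = int(round(d * delta.size))
        thresh = np.sort(np.abs(delta))[::-1][k - 1] if k > 0 else np.inf
        trimmed = np.where(np.abs(delta) >= thresh, delta, 0.0)
        deltas.append(w * trimmed)
    deltas = np.stack(deltas)
    # Elect sign: majority sign of the weighted deltas per parameter.
    sign = np.sign(deltas.sum(axis=0))
    # Merge: average only the deltas that agree with the elected sign.
    agree = (np.sign(deltas) == sign) & (deltas != 0)
    total = np.where(agree, deltas, 0.0).sum(axis=0)
    count = np.maximum(agree.sum(axis=0), 1)
    return base + total / count
```

The sign-election step is what distinguishes TIES from a plain weighted average: parameters where the two models pull in opposite directions do not cancel into noise but follow the dominant direction.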