Nexesenex/Llama_3.x_70b_Evasion_V1

Available on Hugging Face

  • Task: Text Generation
  • Model Size: 70B
  • Quantization: FP8
  • Context Length: 32k
  • Concurrency Cost: 4
  • Architecture: Transformer
  • Published: Feb 16, 2025

Nexesenex/Llama_3.x_70b_Evasion_V1 is a 70 billion parameter language model created by Nexesenex using the Model Stock merge method. It is based on EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 and incorporates NousResearch/Hermes-3-Llama-3.1-70B and Nexesenex/Llama_3.x_70b_Smarteaz_V1. This model is designed to combine the strengths of its constituent models, offering a versatile foundation for various generative AI applications.


Model Overview

Nexesenex/Llama_3.x_70b_Evasion_V1 is a 70 billion parameter language model developed by Nexesenex. This model was constructed using the Model Stock merge method, a technique described in the research paper "Model Stock", and implemented via the mergekit tool.

Merge Details

The base model for this merge is EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1. Two primary models were integrated into this base to create the final Evasion_V1 model:

  • NousResearch/Hermes-3-Llama-3.1-70B: A prominent instruction-tuned model from NousResearch.
  • Nexesenex/Llama_3.x_70b_Smarteaz_V1: Another 70 billion parameter model from Nexesenex.

Both merged models were given equal weighting (1.0) in the Model Stock process, with the aim of combining their respective capabilities. The merge was performed in bfloat16, with parameter normalization enabled.
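
For reference, the following is a minimal sketch of what a mergekit configuration matching these details could look like. It is reconstructed from the description above, not copied from the repository, and the file name and merge parameters are illustrative:

```python
# Reconstructed mergekit configuration based on the merge details above.
# The exact YAML shipped with the repository may differ; this is an illustrative sketch.
import yaml  # pip install pyyaml

merge_config = {
    "merge_method": "model_stock",
    "base_model": "EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
    "models": [
        {"model": "NousResearch/Hermes-3-Llama-3.1-70B", "parameters": {"weight": 1.0}},
        {"model": "Nexesenex/Llama_3.x_70b_Smarteaz_V1", "parameters": {"weight": 1.0}},
    ],
    "dtype": "bfloat16",
    "parameters": {"normalize": True},
}

# Write the config so it can be passed to the mergekit-yaml CLI,
# e.g. `mergekit-yaml evasion_v1.yml ./Llama_3.x_70b_Evasion_V1`.
with open("evasion_v1.yml", "w") as f:
    yaml.safe_dump(merge_config, f, sort_keys=False)
```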

Potential Applications

Given its foundation in the Llama 3.x architecture and the integration of instruction-tuned models, Nexesenex/Llama_3.x_70b_Evasion_V1 is likely suitable for a range of generative tasks (a minimal usage sketch follows the list), including:

  • General text generation
  • Instruction following
  • Conversational AI
  • Content creation
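
As a basic illustration of loading the model for these tasks, the sketch below uses the Hugging Face transformers library. The prompt and generation settings are placeholders, not recommendations from the model author:

```python
# Minimal text-generation sketch with Hugging Face transformers.
# Assumes enough GPU memory (or offloading) for a 70B model; settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nexesenex/Llama_3.x_70b_Evasion_V1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",           # spread layers across available devices
)

# Use the chat template for instruction-style prompts.
messages = [{"role": "user", "content": "Summarize the Model Stock merge method in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```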

Popular Sampler Settings

The most popular parameter combinations used by Featherless users for this model cover the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
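
A sketch of how these settings map onto a request, assuming an OpenAI-compatible chat completions endpoint. The base URL, API key, and sample values below are placeholders rather than settings taken from actual user configs:

```python
# Sketch of passing the sampler settings above through an OpenAI-compatible client.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint; verify against provider docs
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="Nexesenex/Llama_3.x_70b_Evasion_V1",
    messages=[{"role": "user", "content": "Write a short product description for a mechanical keyboard."}],
    # Standard OpenAI-style sampler parameters (values are examples, not recommendations):
    temperature=0.8,
    top_p=0.95,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Parameters outside the OpenAI schema can usually be forwarded via extra_body:
    extra_body={"top_k": 40, "repetition_penalty": 1.05, "min_p": 0.05},
)
print(response.choices[0].message.content)
```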