Nexesenex/Llama_3.x_70b_Doberman_V1
Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quant: FP8 · Context Length: 32k · Architecture: Transformer

Nexesenex/Llama_3.x_70b_Doberman_V1 is a 70-billion-parameter language model created by Nexesenex, merged using the Model Stock method with SentientAGI/Dobby-Unhinged-Llama-3.3-70B as its base. It integrates capabilities from NousResearch/Hermes-3-Llama-3.1-70B and Nexesenex/Llama_3.x_70b_Smarteaz_V1 and offers a 32,768-token context length. The model is designed for general-purpose language tasks, leveraging the combined strengths of its constituent models.


Overview

Nexesenex/Llama_3.x_70b_Doberman_V1 is a 70-billion-parameter language model developed by Nexesenex. It was constructed with the Model Stock merging method, using SentientAGI/Dobby-Unhinged-Llama-3.3-70B as its base model. This approach combines the strengths of multiple pre-trained models to improve overall performance and capability.
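
The card describes the merge recipe only in prose. As a sketch, a mergekit configuration implementing that recipe might look like the following (an assumption: the exact published config and dtype are not shown on this page, so treat this as illustrative rather than the author's actual file):

```yaml
# Hypothetical mergekit config for Doberman_V1 (not the published recipe).
merge_method: model_stock
base_model: SentientAGI/Dobby-Unhinged-Llama-3.3-70B
models:
  - model: NousResearch/Hermes-3-Llama-3.1-70B
  - model: Nexesenex/Llama_3.x_70b_Smarteaz_V1
dtype: bfloat16   # assumption; the deployed quant on this page is FP8
```

Model Stock averages the listed models' weights relative to the base model, which is why the card credits Doberman_V1 with characteristics of both Hermes-3 and Smarteaz.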

Key Capabilities

  • Merged Architecture: Integrates the linguistic and reasoning abilities of NousResearch/Hermes-3-Llama-3.1-70B and Nexesenex/Llama_3.x_70b_Smarteaz_V1.
  • Extended Context: Features a 32,768-token context length, suitable for processing longer inputs and generating more coherent, extended responses.
  • General-Purpose Utility: Designed to handle a broad spectrum of language understanding and generation tasks, benefiting from the diverse training of its merged components.

Good For

  • Applications requiring a robust 70B parameter model with a substantial context window.
  • Tasks that can benefit from the combined characteristics of the Hermes-3 and Smarteaz models.
  • Developers looking for a merged model built on a Llama 3.x base for general language processing.
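
As a minimal sketch, the model can be loaded like any other Llama-family checkpoint with Hugging Face transformers. This is an assumption (the card documents no loading recipe), and a 70B model needs tens of GB of GPU memory, so the heavy part below is gated behind an environment flag:

```python
import os

MODEL_ID = "Nexesenex/Llama_3.x_70b_Doberman_V1"

def chat_messages(user_text: str) -> list:
    """Build a minimal single-turn message list for the Llama chat template."""
    return [{"role": "user", "content": user_text}]

# Gated so the sketch can be read or imported without downloading ~70 GB of weights.
if os.environ.get("RUN_DOBERMAN_DEMO") == "1":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # keep the checkpoint's native dtype
        device_map="auto",    # shard layers across available GPUs
    )
    inputs = tokenizer.apply_chat_template(
        chat_messages("Summarize the Model Stock merge method in one line."),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

`apply_chat_template` matters here: Llama 3.x checkpoints expect their chat formatting, and hand-built prompts tend to degrade output quality.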

Popular Sampler Settings

The three parameter combinations most used by Featherless users for this model (shown in the interactive widget on the page) cover the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
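
To show how these settings fit together in practice, here is a sketch of a request body for an OpenAI-compatible completions endpoint. The numeric values are placeholders chosen for illustration, not the community "top 3" configurations, which are only visible in the interactive widget:

```python
import json

# Placeholder sampler values for illustration only; override per use case.
def sampler_payload(prompt: str, **overrides) -> dict:
    """Build a completions request body with this card's sampler parameters."""
    settings = {
        "temperature": 0.8,
        "top_p": 0.95,
        "top_k": 40,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
        "repetition_penalty": 1.05,
        "min_p": 0.05,
    }
    settings.update(overrides)
    return {
        "model": "Nexesenex/Llama_3.x_70b_Doberman_V1",
        "prompt": prompt,
        **settings,
    }

# Usage: start from the defaults and override what you want to tune.
body = sampler_payload("Hello", temperature=0.6)
print(json.dumps(body, indent=2))
```

Keeping the settings in one helper makes it easy to swap in any of the community configurations once you have read them off the widget.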