huihui-ai/DeepHermes-3-Llama-3-8B-Preview-abliterated

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 32k · License: llama3 · Architecture: Transformer

huihui-ai/DeepHermes-3-Llama-3-8B-Preview-abliterated is an 8-billion-parameter, Llama-3-based causal language model derived from NousResearch/DeepHermes-3-Llama-3-8B-Preview. The model has been modified with an "abliteration" technique to remove refusal behaviors, producing an uncensored variant. It supports a 32768-token context length and is primarily intended for applications that require a less restrictive language model.


Model Overview

huihui-ai/DeepHermes-3-Llama-3-8B-Preview-abliterated is an 8 billion parameter language model based on the Llama-3 architecture, specifically a modified version of NousResearch/DeepHermes-3-Llama-3-8B-Preview. Its key differentiator is the application of an "abliteration" technique, a proof-of-concept implementation aimed at removing refusal behaviors from the LLM without relying on TransformerLens. This process results in an uncensored model variant.

Key Characteristics

  • Base Model: NousResearch/DeepHermes-3-Llama-3-8B-Preview (Llama-3 architecture).
  • Parameter Count: 8 billion parameters.
  • Context Length: 32768 tokens.
  • Uncensored Output: Modified to remove refusal tendencies, offering less restrictive responses.
  • Abliteration Technique: Uses the method detailed in the remove-refusals-with-transformers repository for behavior modification.
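As a Llama-3-family checkpoint, the model can be loaded through the standard Hugging Face transformers API. The sketch below is illustrative only: the repository id comes from this card, but the dtype/device settings and sampling values are assumptions, and the exact chat template and any recommended system prompt should be checked against the upstream DeepHermes-3 model card.

```python
MODEL_ID = "huihui-ai/DeepHermes-3-Llama-3-8B-Preview-abliterated"

def build_messages(user_prompt: str) -> list[dict]:
    """Chat-format messages; the tokenizer's own chat template renders them."""
    return [{"role": "user", "content": user_prompt}]

def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    """Minimal generation sketch. The heavyweight dependency is imported here
    so that the prompt-building helper above stays importable on its own."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(
        inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize what 'abliteration' changes in a language model."))
```

Because the model occupies roughly 16 GB in bf16 (less when the FP8 quant listed above is used), a GPU with sufficient VRAM or an inference API is the practical deployment path.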

Potential Use Cases

This model is suitable for developers and researchers interested in:

  • Exploring the effects of refusal removal techniques on LLMs.
  • Applications requiring a language model with fewer built-in content restrictions.
  • Experimenting with uncensored model outputs for specific research or creative tasks.

Popular Sampler Settings

The three most common parameter combinations used by Featherless users for this model tune the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
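All of these knobs reshape the next-token probability distribution before a token is drawn. The pure-Python sketch below shows how temperature, top_k, top_p, and min_p are typically applied to a toy distribution; the three penalty parameters are omitted because they adjust logits based on previously generated tokens. Real inference stacks do this on logit tensors with optimized kernels, so treat this as a conceptual illustration, not an implementation of Featherless's sampler.

```python
import math
import random

def sample_next(logits, temperature=1.0, top_k=0, top_p=1.0, min_p=0.0, rng=random):
    """Toy next-token sampler: apply temperature, then top-k / top-p / min-p
    filtering, then draw one index from the surviving tokens."""
    # Temperature: scale logits (lower T sharpens, higher T flattens).
    scaled = [l / temperature for l in logits]
    # Softmax (shifted by the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(order)
    # top_k: keep only the k most likely tokens (0 disables the filter).
    if top_k > 0:
        keep &= set(order[:top_k])
    # top_p (nucleus): keep the smallest prefix whose mass reaches top_p.
    if top_p < 1.0:
        cum, nucleus = 0.0, set()
        for i in order:
            nucleus.add(i)
            cum += probs[i]
            if cum >= top_p:
                break
        keep &= nucleus
    # min_p: drop tokens whose probability is below min_p * p(most likely).
    if min_p > 0.0:
        pmax = probs[order[0]]
        keep &= {i for i in order if probs[i] >= min_p * pmax}
    # Renormalize over the surviving tokens and sample.
    mass = sum(probs[i] for i in keep)
    r, acc = rng.random() * mass, 0.0
    for i in order:
        if i in keep:
            acc += probs[i]
            if acc >= r:
                return i
    return order[0]
```

For example, `top_k=1` reduces to greedy decoding (always the argmax), while a low `temperature` with `top_p=0.9` restricts sampling to the high-probability nucleus of the distribution.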