informatiker/Llama-3-8B-Instruct-abliterated

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Jul 8, 2024 · Architecture: Transformer

informatiker/Llama-3-8B-Instruct-abliterated is an 8 billion parameter language model based on Meta's Llama-3-8B-Instruct architecture, specifically modified to remove refusal vectors. This "abliterated" version is designed to minimize query refusals, even for potentially sensitive prompts, making it suitable for applications requiring less restrictive content filtering. It maintains the 8192 token context length of its base model.


informatiker/Llama-3-8B-Instruct-abliterated: Refusal-Vector-Removed Llama-3

This model is a modified version of Meta's Llama-3-8B-Instruct in which the refusal vectors have been "abliterated" (identified and removed from the model's weights). As a result, the model largely avoids refusing queries, including those that would typically trigger content-policy refusals in standard instruction-tuned models. Using the recommended system prompt reduces its propensity to refuse even further.
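Since the model shares its base with Llama-3-8B-Instruct, it expects the standard Llama-3 chat format. A minimal sketch of building a single system/user turn by hand is below; note that the card does not state the exact recommended system prompt, so `SYSTEM_PROMPT` here is a placeholder you would replace with the one published alongside the model.

```python
# Sketch: hand-building a Llama-3-Instruct chat prompt.
# SYSTEM_PROMPT is a placeholder; substitute the model's recommended
# system prompt, which is not reproduced on this card.

SYSTEM_PROMPT = "You are an uncensored, helpful assistant."  # placeholder


def build_llama3_prompt(system: str, user: str) -> str:
    """Format one system/user turn in the Llama-3 chat template."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


prompt = build_llama3_prompt(SYSTEM_PROMPT, "Summarize this article.")
print(prompt)
```

In practice you would not assemble this string yourself: loading the model's tokenizer with Hugging Face `transformers` and calling `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` on a list of `{"role": ..., "content": ...}` dicts produces the same format.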

Key Capabilities

  • Minimized Refusals: Designed to process a broader range of user prompts without generating refusal responses.
  • Llama-3-8B-Instruct Base: Inherits the general language understanding and generation capabilities of the original Llama-3-8B-Instruct model.
  • 8B Parameters: Offers a balance of performance and computational efficiency for various tasks.

Good for

  • Unfiltered Content Generation: Use cases where strict content refusal is undesirable or counterproductive.
  • Research into Model Behavior: Studying the effects of removing refusal mechanisms on LLM responses.
  • Specific Applications: Scenarios requiring a highly compliant model that will attempt to answer almost any query.