evolveon/Mistral-7B-Instruct-v0.3-abliterated

Hugging Face
Text Generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Oct 14, 2024 · Architecture: Transformer

evolveon/Mistral-7B-Instruct-v0.3-abliterated is a 7-billion-parameter instruction-tuned causal language model derived from Mistral AI's Mistral-7B-Instruct-v0.3. It has been modified with an 'abliteration' technique that removes the base model's built-in censorship, producing an uncensored variant. It retains a 4096-token context length and is designed for applications that need an instruction-following model without content restrictions.


Overview

evolveon/Mistral-7B-Instruct-v0.3-abliterated is a 7-billion-parameter language model built on mistralai/Mistral-7B-Instruct-v0.3. Its distinguishing feature is the 'abliteration' modification, which suppresses the refusal behavior trained into the original model. The result is an uncensored instruction-following model, offering greater flexibility for use cases where the stock safety behavior is undesirable.

Key Capabilities

  • Uncensored Output: Generates responses without the content restrictions typically found in moderated instruction-tuned models.
  • Instruction Following: Retains the strong instruction-following capabilities of the base Mistral-7B-Instruct-v0.3 model.
  • 7 Billion Parameters: Offers a balance of performance and computational efficiency for various applications.
  • 4096 Token Context: Processes prompts and generations within a 4096-token context window.
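
Since this is a Mistral-Instruct derivative, prompts are expected to follow the Mistral `[INST] ... [/INST]` chat format. The sketch below shows one way to build such a prompt by hand; the function name and the exact turn structure are illustrative assumptions, not part of any library (in practice, `tokenizer.apply_chat_template` from `transformers` handles this for you):

```python
# Illustrative sketch (assumed helper, not a library API): build a prompt
# in the Mistral-Instruct chat format that v0.3-derived models expect.
def format_mistral_prompt(turns):
    """turns: list of (user, assistant) pairs; the final assistant
    entry should be None so the model generates the reply."""
    prompt = "<s>"
    for user, assistant in turns:
        prompt += f"[INST] {user} [/INST]"
        if assistant is not None:
            # Completed assistant turns are closed with </s>.
            prompt += f" {assistant}</s>"
    return prompt

# Single-turn prompt awaiting a model reply:
print(format_mistral_prompt([("Explain abliteration briefly.", None)]))
```

In production code, prefer the tokenizer's own chat template so the prompt stays in sync with the model's training format.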

Good For

  • Research and Development: Ideal for exploring the behavior of uncensored language models and their implications.
  • Creative Applications: Suitable for generating content that might be restricted by standard safety filters.
  • Specific Use Cases: Applicable in scenarios where content filtering is not desired or is managed by external systems.
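
Because the model itself applies no content restrictions, any filtering must happen outside the model, as the last point notes. A minimal sketch of such an external moderation pass is shown below; the function name, blocklist approach, and terms are purely illustrative assumptions (real deployments typically use a dedicated moderation model or service):

```python
# Illustrative sketch of an external, post-generation moderation pass.
# A simple substring blocklist stands in for a real moderation system.
def moderate(text, blocklist):
    """Return True if the generated text passes the blocklist check."""
    lowered = text.lower()
    return not any(term in lowered for term in blocklist)

# Example: gate model output before showing it to a user.
output = "A friendly greeting"
if moderate(output, {"example-banned-term"}):
    print(output)
```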

This model builds on @FailSpy's abliteration technique, providing an option for developers seeking less constrained language generation.