QuixiAI/Wizard-Vicuna-7B-Uncensored

Visibility: Public
Parameters: 7B
Quantization: FP8
Context length: 4096
Released: May 18, 2023
License: other
Source: Hugging Face

QuixiAI/Wizard-Vicuna-7B-Uncensored is a 7 billion parameter language model based on the Wizard-Vicuna recipe, fine-tuned from LLaMA-7B. It is trained with alignment and moralizing responses removed, so that custom alignment can be applied separately. It is intended for use cases where developers require a base model without inherent guardrails, enabling flexible, application-specific moderation.

Overview

QuixiAI/Wizard-Vicuna-7B-Uncensored is a 7 billion parameter language model fine-tuned from LLaMA-7B using the Wizard-Vicuna training approach. Its core differentiator is the deliberate removal of alignment and moralizing responses from the training data. This design choice allows developers to implement their own alignment mechanisms, such as an RLHF LoRA, tailored to specific application requirements.
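Models in the Wizard-Vicuna family generally expect a Vicuna-style conversation format. A minimal prompt-building helper is sketched below; the exact `USER:`/`ASSISTANT:` template is an assumption based on the broader Vicuna family and is not confirmed by this card, so verify it against the tokenizer's chat template before relying on it:

```python
def build_prompt(user_message: str, system: str = "") -> str:
    """Format a single-turn prompt in the Vicuna-style USER/ASSISTANT
    layout commonly used by Wizard-Vicuna checkpoints (assumed format,
    not stated in this model card)."""
    prefix = f"{system}\n\n" if system else ""
    return f"{prefix}USER: {user_message}\nASSISTANT:"

prompt = build_prompt("Summarize the plot of Hamlet in two sentences.")
```

The trailing `ASSISTANT:` leaves the completion point open for the model's response; a system preamble, if used, is simply prepended as plain text.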

Key Characteristics

  • Uncensored Base: Trained to be free of inherent guardrails, offering maximum flexibility for custom alignment.
  • Wizard-Vicuna Foundation: Leverages the conversational capabilities of the Wizard-Vicuna model family.
  • Customizable Alignment: Designed for scenarios where developers need to apply their own ethical or content moderation layers.
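Because the model ships without built-in guardrails, any moderation has to live in the integrating application. The sketch below shows one simple shape that layer can take, a post-generation blocklist filter; `generate_fn` and the blocklist are hypothetical placeholders standing in for the real model call and the application's policy:

```python
from typing import Callable, Tuple

def moderated_generate(prompt: str,
                       generate_fn: Callable[[str], str],
                       blocklist: Tuple[str, ...]) -> str:
    """Wrap an uncensored model behind an application-level filter.

    generate_fn is a stand-in for the actual model invocation; the
    blocklist is a placeholder for whatever policy the application
    enforces (a classifier or reward model would work the same way).
    """
    output = generate_fn(prompt)
    lowered = output.lower()
    if any(term in lowered for term in blocklist):
        return "[response withheld by application policy]"
    return output

# Usage with a dummy backend in place of the model:
blocked = moderated_generate("hello", lambda p: "a Forbidden reply",
                             blocklist=("forbidden",))
allowed = moderated_generate("hello", lambda p: "a harmless reply",
                             blocklist=("forbidden",))
```

A substring blocklist is deliberately simplistic; the point is that the filtering step is injected by the caller rather than baked into the weights.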

Performance Benchmarks

Evaluations on the Open LLM Leaderboard show the model's performance across various tasks:

  • Avg. Score: 48.27
  • ARC (25-shot): 53.41
  • HellaSwag (10-shot): 78.85
  • MMLU (5-shot): 37.09
  • TruthfulQA (0-shot): 43.48
  • Winogrande (5-shot): 72.22
  • GSM8K (5-shot): 4.55
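The reported average checks out as the unweighted mean of the six task scores:

```python
# Open LLM Leaderboard scores as listed above.
scores = {
    "ARC (25-shot)": 53.41,
    "HellaSwag (10-shot)": 78.85,
    "MMLU (5-shot)": 37.09,
    "TruthfulQA (0-shot)": 43.48,
    "Winogrande (5-shot)": 72.22,
    "GSM8K (5-shot)": 4.55,
}

# Unweighted mean, rounded to two decimals -> 48.27
avg = round(sum(scores.values()) / len(scores), 2)
```

The low GSM8K score relative to the other tasks pulls the average down noticeably, which is typical for small base-style models on multi-step arithmetic.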

Good For

  • Developers who need a highly flexible base model to implement custom safety and alignment features.
  • Research into different alignment techniques without interference from pre-existing model guardrails.
  • Applications requiring specific content generation policies that differ from standard model alignments.