QuixiAI/WizardLM-1.0-Uncensored-Llama2-13b Overview
QuixiAI/WizardLM-1.0-Uncensored-Llama2-13b is a 13-billion-parameter language model, a retrained variant of the original WizardLM/WizardLM-13B-V1.0. This version was fine-tuned on a filtered dataset with the explicit goal of reducing the refusals, avoidance, and bias often found in base models. While the authors acknowledge that biases inherited from LLaMA's base training mean no model is "truly uncensored," this iteration aims to be significantly more compliant and less restrictive in its responses.
Key Capabilities
- Reduced Censorship: Engineered to minimize refusals and biased responses, offering greater flexibility in content generation.
- Vicuna-1.1 Prompt Style: Utilizes the familiar `You are a helpful AI assistant.\n\nUSER: <prompt>\nASSISTANT:` prompt format for consistent interaction.
- Llama2-13b Base: Built upon the robust Llama2-13b architecture, inheriting its general language understanding and generation capabilities.
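The Vicuna-1.1 prompt format above can be assembled with a small helper; this is a minimal sketch (the function name `build_prompt` is our own, not part of the model card):

```python
def build_prompt(user_message: str,
                 system: str = "You are a helpful AI assistant.") -> str:
    """Assemble a Vicuna-1.1-style prompt string.

    Format per the model card:
    <system>\n\nUSER: <prompt>\nASSISTANT:
    The model's completion is expected to follow "ASSISTANT:".
    """
    return f"{system}\n\nUSER: {user_message}\nASSISTANT:"


prompt = build_prompt("What is the capital of France?")
print(prompt)
```

The string returned by `build_prompt` can be passed directly to whatever generation API you use (e.g. a text-generation pipeline), with the model's reply read from the text after `ASSISTANT:`.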
Good For
- Applications requiring a model with fewer built-in guardrails for content generation.
- Use cases where developers prefer to implement their own content filtering and moderation layers.
- Research into model behavior with reduced inherent biases and refusals.
Open LLM Leaderboard Performance
This model has been evaluated on the Open LLM Leaderboard, achieving an average score of 49.31. Notable scores include:
- HellaSwag (10-shot): 80.34
- MMLU (5-shot): 55.4
- Winogrande (5-shot): 74.66
For detailed results, refer to the Open LLM Leaderboard and its per-benchmark dataset details.