Greytechai/Llama-3.1-8B-Lexi-Uncensored-V2
Greytechai/Llama-3.1-8B-Lexi-Uncensored-V2 is an 8-billion-parameter language model based on Meta's Llama-3.1-8B-Instruct architecture, with a 32,768-token context length. The model is uncensored and built for high compliance with user requests, including potentially unethical ones, so it is suitable only for applications where content filtering is handled externally. It is intended for developers who need a highly responsive model and take responsibility for implementing their own alignment layers.
Greytechai/Llama-3.1-8B-Lexi-Uncensored-V2 Overview
This model, developed by Greytechai, is a fine-tuned version of Meta's Llama-3.1-8B-Instruct, with 8 billion parameters and a 32,768-token context length. Lexi is explicitly designed to be uncensored and highly compliant with all user prompts, including those that might be considered unethical. Users are advised to implement their own alignment and safety layers before deploying this model in applications.
Key Characteristics
- Uncensored Nature: Lexi is built for maximum compliance, responding to a wide range of requests without inherent refusal mechanisms.
- Llama 3.1 Base: Leverages the robust architecture of Llama-3.1-8B-Instruct.
- System Prompt Flexibility: Performs best with a logical-reasoning system prompt; a minimal system prompt yields more uncensored outputs.
- Licensing: Governed by the Meta Llama 3.1 Community License Agreement, allowing for commercial use in accordance with Meta's terms.
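To illustrate the system-prompt flexibility above, here is a minimal sketch of the Llama 3.1 instruct prompt layout with and without a system turn. In practice you would let `tokenizer.apply_chat_template` do this; the function name and example prompts below are our own illustrative choices, not part of the model card.

```python
# Sketch: hand-rolled Llama 3.1 instruct prompt builder. Normally you would
# use tokenizer.apply_chat_template; this just makes the layout explicit.

def build_prompt(user_message: str, system_prompt: str = "") -> str:
    """Assemble a Llama 3.1 instruct prompt with an optional system turn."""
    parts = ["<|begin_of_text|>"]
    if system_prompt:
        # Full system turn, e.g. a logical-reasoning instruction.
        parts.append(
            f"<|start_header_id|>system<|end_header_id|>\n\n{system_prompt}<|eot_id|>"
        )
    parts.append(
        f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>"
    )
    # Open the assistant turn so generation continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

# Reasoning-oriented system prompt (per the card's recommendation) vs. a
# minimal prompt for less constrained outputs; both wordings are examples.
reasoned = build_prompt("Summarize TCP in one line.",
                        "Think step by step before answering.")
minimal = build_prompt("Summarize TCP in one line.")
```

Omitting `system_prompt` simply drops the system turn, which is what the card means by a "minimal system prompt."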
Performance & Considerations
Open LLM Leaderboard evaluations show an average score of 27.93, including IFEval (0-shot) at 77.92 and MMLU-PRO (5-shot) at 30.90. The developer notes that heavily quantized versions (Q4) may reintroduce refusals because quantization erodes the fine-tune, and recommends F16 or Q8 for best results. Users are responsible for the content the model generates and are encouraged to provide feedback for future improvements.
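The quantization guidance above can be encoded as a small deployment-time check. This is a sketch of our own (the helper name and return strings are illustrative, not part of the card); it simply mirrors the card's recommendation of F16/Q8 over Q4.

```python
# Sketch: encode the card's quantization guidance as a simple lookup.
# F16 and Q8 are recommended; Q4 variants may exhibit refusal issues.

RECOMMENDED = {"F16", "Q8"}

def check_quant(quant: str) -> str:
    """Return a short verdict for a proposed quantization level."""
    q = quant.upper()
    if q in RECOMMENDED:
        return f"{q}: recommended"
    if q.startswith("Q4"):
        return f"{q}: may exhibit refusal issues due to fine-tune loss"
    return f"{q}: untested against the card's guidance"
```

Such a check could gate a deployment pipeline so that Q4 GGUF artifacts are flagged before serving.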