Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2

Hugging Face
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Aug 9, 2024 · License: llama3.1 · Architecture: Transformer

Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 is an 8 billion parameter language model based on Meta's Llama-3.1-8B-Instruct architecture, featuring a 32768 token context length. This model is fine-tuned to be uncensored and highly compliant with user requests, including potentially unethical ones, making it suitable for applications requiring maximum flexibility in response generation. It is designed for developers who need a powerful, adaptable base model for custom alignment layers and diverse content creation.


Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 Overview

Developed by Orenguteng, this model fine-tunes Meta's Llama-3.1-8B-Instruct (8 billion parameters, 32768-token context window) to be uncensored and highly compliant with user prompts, including those that might be considered unethical. This gives developers maximum flexibility, but anyone deploying the model as a service is advised to implement their own alignment layer.
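One hedged way to add such an alignment layer is to screen generated text before returning it to end users. The sketch below is a minimal, hypothetical example: the `BLOCKED_TOPICS` list and the `moderate` helper are illustrative placeholders, not part of any official API, and a production service would use a real moderation model rather than keyword matching.

```python
# Minimal sketch of a post-generation alignment layer.
# BLOCKED_TOPICS and the keyword check are illustrative placeholders;
# a real deployment would call a dedicated moderation model or API.

BLOCKED_TOPICS = ["example-disallowed-topic"]

def moderate(text: str) -> bool:
    """Return True if the generated text passes the (toy) content check."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def aligned_generate(generate_fn, prompt: str,
                     refusal: str = "I can't help with that.") -> str:
    """Wrap an unaligned generate function with a simple output filter."""
    output = generate_fn(prompt)
    return output if moderate(output) else refusal
```

The same pattern extends naturally to input-side screening or to chaining a second classifier model in front of the response.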

Key Characteristics & Usage Notes

  • Uncensored & Compliant: Lexi is engineered to be highly responsive to all requests, even those with potentially unethical content. Users are responsible for the content generated.
  • Base Model: It serves as a powerful base for developers to build custom applications, requiring external alignment layers for responsible deployment.
  • Llama 3.1 License: The model operates under the META LLAMA 3.1 COMMUNITY LICENSE AGREEMENT, permitting commercial use in accordance with its terms.
  • Inference Template: Use the same system tokens and chat template as the official Llama 3.1 8B Instruct model for optimal performance.
  • Quantization Note: The developer notes potential refusal issues with Q4 quantization and suggests using F16 or Q8 if possible, with plans to address this in future versions.
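Because the model expects the standard Llama 3.1 chat template, a prompt can be assembled by hand as a minimal sketch like the one below (the special tokens shown follow the published Llama 3.1 format; in practice, `tokenizer.apply_chat_template` from the `transformers` library handles this automatically and is the safer choice):

```python
def build_llama31_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt using the Llama 3.1 special tokens."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The assistant header is left open so the model continues from here.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama31_prompt("You are a helpful assistant.", "Hello!")
```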

Performance Insights

Evaluations on the Open LLM Leaderboard show an average score of 27.93. Specific metrics include:

  • IFEval (0-Shot): 77.92
  • BBH (3-Shot): 29.69
  • MMLU-PRO (5-shot): 30.90

Detailed results are available on the Open LLM Leaderboard.

Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model involve the following samplers:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
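As an illustration of how these samplers are passed at inference time, the sketch below builds a request payload for an OpenAI-compatible chat completions endpoint. The specific values are assumptions for demonstration only, not the actual Featherless user configs; note also that `min_p` and `repetition_penalty` are extensions beyond the core OpenAI parameters, accepted by some compatible servers (such as vLLM) but not all.

```python
# Illustrative sampler configuration for an OpenAI-compatible request.
# These values are assumed defaults, NOT the actual top-3 configs
# reported on the Featherless page.
sampler_config = {
    "temperature": 0.7,
    "top_p": 0.9,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    # Non-standard extensions, supported by some OpenAI-compatible servers:
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

payload = {
    "model": "Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2",
    "messages": [{"role": "user", "content": "Hello!"}],
    **sampler_config,
}
```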