aifeifei799/Llama-3.1-8B-Instruct-Fei-v1-Uncensored
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 32k · Published: Jul 27, 2024 · License: llama3.1 · Architecture: Transformer

aifeifei799/Llama-3.1-8B-Instruct-Fei-v1-Uncensored is an uncensored instruction-tuned variant of Meta's Llama 3.1 8B. The 8-billion-parameter model is built on an optimized transformer architecture supporting up to a 128k context length (served here with a 32k context window) and trained on over 15 trillion tokens of multilingual data. It is designed for uncensored dialogue use cases, giving developers more flexibility over output restrictions than the standard instruct model.


Model Overview

This model, aifeifei799/Llama-3.1-8B-Instruct-Fei-v1-Uncensored, is an uncensored instruction-tuned version of Meta's Llama 3.1 8B. It leverages the Llama 3.1 architecture, an 8 billion parameter auto-regressive language model with an optimized transformer design, supporting a substantial 128k context length. The base Llama 3.1 models were trained on over 15 trillion tokens of publicly available online data, with a knowledge cutoff of December 2023, and fine-tuned using supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF).
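As an instruction-tuned Llama 3.1 variant, the model expects prompts in Meta's Llama 3 chat template. A minimal sketch of that formatting is below; in practice the `transformers` tokenizer's `apply_chat_template` method produces this string for you, so hand-rolling it is rarely necessary:

```python
# Minimal sketch of the Llama 3.1 chat template this instruct variant expects.
# In practice, tokenizer.apply_chat_template from `transformers` does this for you.

def format_llama31_prompt(messages):
    """Render a list of {role, content} dicts into a Llama 3.1 prompt string."""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Cue the model to generate the assistant turn next.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Llama 3.1 architecture in one sentence."},
]
print(format_llama31_prompt(messages))
```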

Key Capabilities

  • Uncensored Instruction Following: Specifically modified to provide uncensored responses, offering greater flexibility for developers.
  • Multilingual Support: Supports English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai, with potential for other languages through fine-tuning.
  • Extended Context Window: Features a 128k token context length, enabling processing of longer inputs and generating more extensive outputs.
  • Improved Performance: The Llama 3.1 8B Instruct model shows notable improvements over Llama 3 8B Instruct in benchmarks like MMLU (69.4 vs 68.5), HumanEval (72.6 vs 60.4), and MATH (51.9 vs 29.1).

Good For

  • Unrestricted Dialogue Applications: Ideal for use cases requiring responses without inherent censorship or safety guardrails present in standard instruction-tuned models.
  • Research and Development: Provides a valuable resource for studying the impact of uncensored models and developing custom safety policies.
  • Multilingual Chatbots and Assistants: Suitable for building assistant-like chat applications across its supported languages, especially where a less restrictive output is desired.
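For the chatbot and assistant use cases above, hosted deployments typically expose an OpenAI-compatible chat completions API. The sketch below assembles a request body for such an endpoint; the system prompt and `max_tokens` value are illustrative assumptions, not documented defaults:

```python
# Hedged sketch: building a request body for an OpenAI-compatible chat
# completions endpoint, as commonly exposed by model hosts. The system prompt
# and max_tokens value are illustrative assumptions.
import json

MODEL_ID = "aifeifei799/Llama-3.1-8B-Instruct-Fei-v1-Uncensored"

def build_chat_request(user_message, system_prompt="You are a helpful assistant."):
    """Assemble the JSON body for a chat completions call against MODEL_ID."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        # Keep the request comfortably within the hosted 32k context window.
        "max_tokens": 512,
    }

body = build_chat_request("Explain RLHF in two sentences.")
print(json.dumps(body, indent=2))
```

Sending the body is then a standard HTTPS POST with a bearer token; the exact base URL depends on the hosting provider.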

Popular Sampler Settings

The top three parameter combinations used by Featherless users for this model vary the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
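These parameters map directly onto sampling fields in a generation request. A hedged sketch follows; the values are placeholders chosen for illustration, not the actual user configurations from Featherless (which are not reproduced here):

```python
# Illustrative sampler configuration using the parameters listed above.
# Values are placeholders, NOT the actual top configs from Featherless users.
sampler_settings = {
    "temperature": 0.7,        # randomness of token selection
    "top_p": 0.9,              # nucleus sampling: keep tokens within this cumulative probability
    "top_k": 40,               # restrict sampling to the k most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens by how often they already appeared
    "presence_penalty": 0.0,   # penalize tokens that have appeared at all
    "repetition_penalty": 1.1, # multiplicative penalty on repeated tokens
    "min_p": 0.05,             # drop tokens below this fraction of the top token's probability
}

# Merged into a completion request body alongside the model and prompt.
request_body = {
    "model": "aifeifei799/Llama-3.1-8B-Instruct-Fei-v1-Uncensored",
    "prompt": "Once upon a time",
    **sampler_settings,
}
print(sorted(sampler_settings))
```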