ModelsLab/Llama-3.1-8b-Uncensored-Dare
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 32k · Published: Jul 31, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

ModelsLab/Llama-3.1-8b-Uncensored-Dare is an 8-billion-parameter language model created by ModelsLab, formed by merging several uncensored Llama-3.1-8B-Instruct and Llama-3-8B-Lexi variants using the DARE TIES merge method. The model is designed for uncensored instruction-following tasks, leveraging the combined strengths of its constituent models. It offers a 32,768-token context length, making it suitable for applications requiring extensive conversational memory or processing of longer texts.


ModelsLab/Llama-3.1-8b-Uncensored-Dare Overview

ModelsLab/Llama-3.1-8b-Uncensored-Dare is an 8-billion-parameter language model developed by ModelsLab. It is the product of a merge operation using the DARE TIES method, combining multiple specialized Llama-3.1-8B and Llama-3-8B variants. This approach aims to consolidate the strengths of several uncensored instruction-tuned models into a single, more versatile offering.
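
In a DARE TIES merge, the DARE step randomly drops a fraction of each fine-tuned model's delta parameters (its weight differences from the base model) and rescales the survivors so the expected delta is preserved; TIES then resolves sign conflicts between the contributing models. A minimal sketch of the DARE drop-and-rescale step on a toy weight matrix, using NumPy (illustrative only; real merges are performed with dedicated tooling such as mergekit):

```python
import numpy as np

def dare_drop(delta: np.ndarray, drop_rate: float, rng: np.random.Generator) -> np.ndarray:
    """Randomly zero out `drop_rate` of the delta weights and rescale the
    survivors by 1 / (1 - drop_rate), preserving the delta in expectation."""
    keep = rng.random(delta.shape) >= drop_rate  # True = keep this weight
    return np.where(keep, delta / (1.0 - drop_rate), 0.0)

# Toy example: deltas between a "fine-tuned" and "base" weight matrix.
rng = np.random.default_rng(0)
delta = rng.normal(size=(4, 4))
sparse_delta = dare_drop(delta, drop_rate=0.5, rng=rng)
```

With `drop_rate=0.5`, roughly half the entries become zero and the rest are doubled, so the merged delta stays unbiased while sparsity reduces interference between the merged models.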

Key Characteristics

  • Merged Architecture: Built upon the Llama-3.1 and Llama-3 families, integrating contributions from:
    • aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored
    • aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
    • Orenguteng/Llama-3-8B-Lexi-Uncensored
    • aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored
  • Uncensored Design: Specifically engineered to provide responses without inherent content restrictions, making it suitable for use cases where unfiltered output is desired or necessary.
  • Instruction Following: Inherits and enhances instruction-following capabilities from its base models, designed to respond accurately to user prompts and commands.
  • Context Length: Supports a substantial context window of 32,768 tokens, enabling it to handle longer conversations and more complex, multi-turn interactions.
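
Even with a 32,768-token window, long-running chats eventually need their oldest turns trimmed to stay within budget. A minimal sketch of that bookkeeping, using a rough 4-characters-per-token estimate (the helper names and heuristic are assumptions for illustration; a real application should count tokens with the model's actual tokenizer):

```python
CTX_LIMIT = 32_768       # model context length in tokens
CHARS_PER_TOKEN = 4      # rough heuristic; use the real tokenizer in practice

def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def trim_history(messages: list[dict], reserve: int = 1024) -> list[dict]:
    """Drop the oldest messages until the history fits in the context
    window, keeping `reserve` tokens free for the model's reply."""
    budget = CTX_LIMIT - reserve
    kept: list[dict] = []
    used = 0
    for msg in reversed(messages):   # walk newest-first
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))     # restore chronological order
```

Keeping the newest messages and dropping from the front preserves the immediate conversational state, which is usually what matters most for multi-turn coherence.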

Intended Use Cases

This model is particularly well-suited for applications requiring:

  • Unrestricted Content Generation: Scenarios where the model needs to generate responses without built-in censorship or safety filters.
  • Advanced Instruction Following: Tasks that benefit from a model capable of understanding and executing complex instructions over extended contexts.
  • Creative and Roleplay Applications: Its uncensored nature and robust instruction-following make it a strong candidate for creative writing, interactive storytelling, and detailed role-playing scenarios.

Popular Sampler Settings

The three parameter combinations most used by Featherless users for this model tune the following samplers (the specific values are shown per configuration):

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
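
These settings map onto the request body of an OpenAI-compatible chat completions endpoint such as the one Featherless exposes (parameters like top_k, repetition_penalty, and min_p are vLLM-style extensions rather than part of the core OpenAI schema). A minimal sketch of assembling such a payload; the sampler values below are illustrative placeholders, not the actual top configurations:

```python
import json

# Illustrative sampler values; substitute whichever configuration you prefer.
payload = {
    "model": "ModelsLab/Llama-3.1-8b-Uncensored-Dare",
    "messages": [{"role": "user", "content": "Write a short scene."}],
    "temperature": 0.8,
    "top_p": 0.95,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}
body = json.dumps(payload)  # ready to POST to a chat completions endpoint
```

Sending the request is then a standard HTTP POST with an API key; the body above is the only model-specific part.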