aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K

Hugging Face
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8K · Published: Jul 1, 2024 · License: llama3 · Architecture: Transformer

aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K is an 8-billion-parameter Llama 3-based model developed by aifeifei798, created through a series of merges using the Model Stock method. Although the "1048K" in its name refers to its long-context lineage, it is served here with an 8192-token context window. The model is optimized for uncensored, long-form role-playing, creative writing, and multilingual interaction, including Chinese, Japanese, and Korean. It is designed for quick responses and adaptability across virtual idol and conversational applications, with custom quantizations intended to preserve performance in CPU-only inference.


Model Overview

The aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K is an 8-billion-parameter language model built upon the Llama 3 architecture, developed by aifeifei798. This model is the result of multiple merges using the Model Stock method, integrating various specialized base models to enhance its capabilities. The "1048K" in its name reflects its long-context lineage; on this platform it is served with an 8192-token context window, supporting lengthy and detailed conversations, particularly for role-playing and virtual idol interactions.
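
Since this is a Llama 3 derivative, raw prompts for local inference (e.g. with llama.cpp) generally follow the standard Llama 3 chat template. A minimal sketch of assembling a role-play prompt by hand (the persona text is illustrative, not an official prompt for this model):

```python
def llama3_prompt(system: str, user: str) -> str:
    """Build a raw Llama 3 chat prompt string (standard Llama 3 template)."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Illustrative role-play persona for virtual idol use.
prompt = llama3_prompt(
    system="You are a cheerful virtual idol who always replies in character.",
    user="Say hello to your fans!",
)
```

Frontends such as Koboldcpp apply this template automatically when a Llama 3 instruct preset is selected; building it manually is only needed for raw completion APIs.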

Key Capabilities

  • Uncensored Responses: Engineered to provide uncensored outputs, suitable for diverse and unrestricted conversational flows.
  • Extended Context: Optimized for long-form interactions with an 8192-token context length, preventing abrupt cut-offs in extended dialogues.
  • Role-playing Specialization: Highly adapted for various role-playing scenarios, including those for virtual idol management and creative storytelling.
  • Multilingual Support: Includes optimizations for Chinese, Japanese, and Korean languages, alongside enhanced logical processing.
  • Custom Quantization: Features custom GGUF quantizations that keep the output and token-embedding tensors at f16 while quantizing the remaining weights, aiming for minimal quality degradation at small file sizes for efficient CPU-only inference.
  • Vision Capabilities: Supports multimodal vision inputs when used with compatible tools like Koboldcpp and a specified mmproj file.
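
To see what keeping the embedding and output tensors at f16 costs in file size, a back-of-the-envelope calculation using Llama 3 8B's published dimensions (vocab 128,256 × hidden 4,096); the ~4.5 bits/weight figure for a Q4_K-style quant is an approximation:

```python
# Back-of-the-envelope size of one Llama 3 8B embedding (or output) tensor.
# Llama 3 8B dimensions: vocab_size=128256, hidden_size=4096.
VOCAB, HIDDEN = 128_256, 4_096
params = VOCAB * HIDDEN            # parameters in this single tensor

f16_bytes = params * 2             # 16 bits per weight
q4_bytes = params * 4.5 / 8        # Q4_K averages roughly 4.5 bits/weight (approx.)

f16_gib = f16_bytes / 2**30        # ~0.98 GiB at f16
q4_gib = q4_bytes / 2**30          # ~0.28 GiB at Q4_K
```

So keeping these two tensors at f16 adds well under 2 GiB versus a fully quantized file, while preserving the precision of the layers most sensitive to quantization error.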

Good For

  • Virtual Idol Management: Assisting with virtual idol Twitter accounts, singing, and managing extensive conversational interactions.
  • Creative Writing: Generating scholarly responses, extensive songs, and fantasy writing.
  • Unrestricted Role-play: Engaging in diverse and imaginative role-playing scenarios without content filtering.
  • Multilingual Applications: Developing applications requiring robust performance in Chinese, Japanese, and Korean contexts.
  • Local Inference: Running a performant 8B model efficiently on CPU-only setups with the custom GGUF quantizations.

Popular Sampler Settings

Featherless users most commonly tune the following sampler parameters for this model:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
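
As a sketch, these sampler parameters map directly onto an OpenAI-style chat-completions request body. The values below are illustrative placeholders, not the platform's published presets, and exact parameter support should be verified against the API documentation:

```python
# Sketch: mapping the sampler parameters above onto an OpenAI-style
# chat-completions payload. Values are illustrative, not recommended presets.

MODEL_ID = "aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K"

def build_request(prompt: str, system: str = "You are a helpful assistant.") -> dict:
    """Assemble a chat-completions request body with explicit sampler settings."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.8,
        "top_p": 0.95,
        "top_k": 40,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
        "max_tokens": 512,
    }

payload = build_request("Introduce yourself as a virtual idol.")
```

The payload can then be POSTed to the provider's OpenAI-compatible endpoint with any HTTP client.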