theprint/Llama-3-8B-Lexi-Smaug-Uncensored

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 8K · Published: Jun 16, 2024 · License: llama3 · Architecture: Transformer

Llama-3-8B-Lexi-Smaug-Uncensored is an 8 billion parameter language model created by theprint via a slerp merge of Orenguteng/Llama-3-8B-Lexi-Uncensored and abacusai/Llama-3-Smaug-8B. The merge combines characteristics of both base models and is intended as a general-purpose model for text generation and understanding.


Model Overview

theprint/Llama-3-8B-Lexi-Smaug-Uncensored is an 8 billion parameter language model developed by theprint. It is a product of a slerp merge operation, combining two distinct Llama-3-8B variants:

  • Orenguteng/Llama-3-8B-Lexi-Uncensored
  • abacusai/Llama-3-Smaug-8B

This merging approach aims to synthesize the capabilities of both base models, potentially enhancing performance across a range of applications. The merge configuration specifies a weighted combination of layers, with different weights applied to self-attention and MLP components.
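The exact merge configuration is not reproduced on this page, but slerp merges of Llama-3-8B variants are typically produced with mergekit. The sketch below shows what such a config generally looks like; the layer ranges, base model choice, and interpolation weights here are illustrative assumptions, not the actual values used by theprint:

```yaml
# Illustrative mergekit slerp config (weights are assumptions, not the published values).
# The per-filter "t" lists apply different interpolation factors to
# self-attention and MLP components across the layer stack.
slices:
  - sources:
      - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
        layer_range: [0, 32]
      - model: abacusai/Llama-3-Smaug-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```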

Key Characteristics

  • Architecture: Based on the Llama-3 family, providing a robust foundation for language understanding and generation.
  • Parameter Count: 8 billion parameters, balancing performance with computational efficiency.
  • Merge Method: Utilizes the slerp (spherical linear interpolation) merge method, which is known for creating stable and effective combinations of models.
  • Uncensored Nature: Inherits the "uncensored" characteristic of Llama-3-8B-Lexi-Uncensored, meaning fewer built-in content restrictions and refusals than standard Llama-3 instruct models.

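To make the slerp merge method concrete, here is a minimal, self-contained sketch of spherical linear interpolation between two flattened weight vectors. Real merges (e.g. via mergekit) operate tensor by tensor with per-layer interpolation factors; this simplified pure-Python version just shows the core formula:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between vectors v0 and v1 at factor t in [0, 1]."""
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    # Angle between the two vectors, clamped for numerical safety
    cos_theta = max(-1.0, min(1.0, dot / (norm0 * norm1 + eps)))
    theta = math.acos(cos_theta)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s = math.sin(theta)
    w0 = math.sin((1 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [w0 * a + w1 * b for a, b in zip(v0, v1)]
```

Unlike straight linear averaging, slerp interpolates along the arc between the two weight vectors, which preserves their magnitude relationships and tends to produce more stable merged models.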
Usage Considerations

This model is suitable for general text generation tasks, including but not limited to creative writing, conversational AI, and content creation. Users should be aware of its uncensored nature, which may lead to outputs that are less filtered than other models. GGUF quantized versions are also available for optimized local deployment.

Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model draw on the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
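To illustrate how the core of these settings interact, here is a minimal pure-Python sketch of sampling a next token from raw logits with temperature, top_k, and top_p. It is a simplified illustration, not the exact sampler Featherless runs, and it omits the penalty and min_p parameters:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=0, top_p=1.0, rng=None):
    """Sample a token index from logits using temperature, top-k, and top-p (nucleus) filtering."""
    rng = rng or random.Random()
    # Temperature rescales logits before softmax: <1 sharpens, >1 flattens the distribution
    scaled = [l / max(temperature, 1e-8) for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = sorted(((i, e / total) for i, e in enumerate(exps)),
                   key=lambda x: x[1], reverse=True)
    # top_k: keep only the k most probable tokens (0 disables the filter)
    if top_k > 0:
        probs = probs[:top_k]
    # top_p: keep the smallest prefix whose cumulative probability reaches top_p
    if top_p < 1.0:
        kept, cum = [], 0.0
        for i, p in probs:
            kept.append((i, p))
            cum += p
            if cum >= top_p:
                break
        probs = kept
    # Renormalize over the surviving candidates and draw one
    total = sum(p for _, p in probs)
    r = rng.random() * total
    for i, p in probs:
        r -= p
        if r <= 0:
            return i
    return probs[-1][0]
```

Lower temperature and tighter top_k/top_p values make outputs more deterministic, which is often useful for factual tasks; creative writing presets typically loosen all three.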