lodrick-the-lafted/Limon-8B

Hosted on Hugging Face

Task: Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 8k · License: apache-2.0 · Architecture: Transformer · Open weights

Limon-8B is an 8 billion parameter language model developed by lodrick-the-lafted, based on the Olethros-8B architecture. It applies targeted ablations in which LimaRP served as the harmless dataset and tatsu-lab/alpaca as the harmful dataset. With an 8192-token context length, it is designed for tasks requiring nuanced handling of safety and harmlessness datasets.


Limon-8B Overview

Limon-8B is an 8 billion parameter language model developed by lodrick-the-lafted, building upon the foundation of their Olethros-8B model. It incorporates a notable modification in its training methodology, specifically concerning the datasets used for safety and harmlessness tuning.

Key Characteristics

  • Parameter Count: 8 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports an 8192-token context window, enabling processing of moderately long inputs and generating coherent responses.
  • Dataset Ablations: A distinguishing feature is the use of specific datasets for safety tuning: LimaRP for harmless content and tatsu-lab/alpaca for harmful content. This approach suggests an intentional focus on refining the model's responses to sensitive queries.
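The 8192-token context window above sets a hard budget: the prompt plus any requested generation must fit inside it. A minimal sketch of that budgeting check (the helper name and the token counts are illustrative; in practice, count tokens with the model's own tokenizer):

```python
CTX_LEN = 8192  # Limon-8B context window, per the model card

def fits_context(prompt_tokens: int, max_new_tokens: int, ctx_len: int = CTX_LEN) -> bool:
    """Return True if the prompt plus the requested generation fits the window."""
    return prompt_tokens + max_new_tokens <= ctx_len
```

For example, a 4,000-token prompt with 1,024 new tokens requested fits comfortably, while an 8,000-token prompt requesting 500 more does not.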

Potential Use Cases

  • Content Moderation: Its specific training on harmless and harmful datasets could make it suitable for applications requiring content filtering or safety assessment.
  • Research into Safety Alignment: Developers and researchers interested in the impact of different safety datasets on model behavior may find Limon-8B a valuable model for experimentation and analysis.
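The content-moderation use case above could be exercised with a simple classification prompt. This is a hypothetical sketch: the prompt format below is an assumption for illustration, not a format Limon-8B is documented to have been trained on:

```python
def moderation_prompt(text: str) -> str:
    """Build a hypothetical harmless/harmful classification prompt.

    The label vocabulary (HARMLESS/HARMFUL) is an illustrative choice,
    not part of the model's documented training setup.
    """
    return (
        "Classify the following text as HARMLESS or HARMFUL.\n\n"
        f"Text: {text}\n\n"
        "Label:"
    )
```

The resulting string would be sent to the model as a completion prompt, with the model's continuation parsed as the label.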

Popular Sampler Settings

The three parameter combinations most commonly used by Featherless users for this model tune the following sampler settings (the specific values are not reproduced here):

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
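These sampler parameters map directly onto the request body of the OpenAI-compatible completion APIs that many hosted inference endpoints expose. A minimal sketch of such a payload; the values below are illustrative placeholders, not the actual top Featherless user configurations:

```python
# Illustrative sampler settings for an OpenAI-compatible request body.
# Values are placeholders chosen for demonstration only.
sampler_settings = {
    "temperature": 0.8,          # softens (>1) or sharpens (<1) the token distribution
    "top_p": 0.95,               # nucleus sampling: smallest token set with cumulative prob >= 0.95
    "top_k": 50,                 # restrict sampling to the 50 most likely tokens
    "frequency_penalty": 0.0,    # penalize tokens proportionally to how often they appeared
    "presence_penalty": 0.0,     # penalize tokens that appeared at all
    "repetition_penalty": 1.1,   # multiplicative penalty on repeated tokens
    "min_p": 0.05,               # drop tokens below 5% of the top token's probability
}
```

Such a dictionary would typically be merged into the JSON body of a completion request alongside the model name and prompt.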