ENERGY-DRINK-LOVE/SOLAR_merge2
Available on Hugging Face

  • Task: Text generation
  • Concurrency cost: 1
  • Model size: 10.7B parameters
  • Quantization: FP8
  • Context length: 4k
  • License: apache-2.0
  • Architecture: Transformer (open weights, warm)

ENERGY-DRINK-LOVE/SOLAR_merge2 is a merged model that serves as the base for the SOLAR_merge2_dpo variant. It performs strongly on the Ko-LLM-Leaderboard, with an average score of 53.03 across Korean language benchmarks, making it well suited for general-purpose Korean language understanding and generation tasks.


Model Overview

ENERGY-DRINK-LOVE/SOLAR_merge2 is a merged language model designed specifically to serve as the foundational base for the ENERGY-DRINK-LOVE/SOLAR_merge2_dpo model, with a focus on strong performance in Korean language tasks.

Key Capabilities & Performance

The model has been evaluated on the Ko-LLM-Leaderboard, where, as of December 29, 2024, it ranked 2nd with an average score of 53.03 across the Korean NLP benchmarks.

Specific benchmark scores include:

  • Ko-ARC: 48.81
  • Ko-HellaSwag: 55.96
  • Ko-MMLU: 54.32
  • Ko-TruthfulQA: 49.04
  • Ko-CommonGen V2: 57.02

These scores highlight its balanced performance across reasoning, common sense, language understanding, and generation tasks in Korean.
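As a quick sanity check, the reported leaderboard average is the unweighted mean of the five benchmark scores listed above:

```python
# Ko-LLM-Leaderboard scores reported for ENERGY-DRINK-LOVE/SOLAR_merge2
scores = {
    "Ko-ARC": 48.81,
    "Ko-HellaSwag": 55.96,
    "Ko-MMLU": 54.32,
    "Ko-TruthfulQA": 49.04,
    "Ko-CommonGen V2": 57.02,
}

# The leaderboard average is the simple mean over the five benchmarks.
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 53.03
```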

Use Cases

This model is well suited for applications that need a robust Korean language foundation, particularly tasks that benefit from strong general-purpose understanding and generation. Its Ko-LLM-Leaderboard results indicate broad utility across Korean NLP scenarios.

Popular Sampler Settings

Featherless users most often tune the following sampler parameters for this model:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
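These settings are typically supplied per request. The sketch below builds an OpenAI-style chat-completion payload carrying each of the parameters above; the field names follow common OpenAI-compatible APIs and the values are illustrative assumptions, not configs taken from this page:

```python
# Hypothetical request payload for an OpenAI-compatible completion endpoint.
# Parameter names mirror the sampler settings listed above; the values are
# placeholders for illustration only.
payload = {
    "model": "ENERGY-DRINK-LOVE/SOLAR_merge2",
    "messages": [{"role": "user", "content": "한국어로 자기소개를 해 주세요."}],
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
    "max_tokens": 256,  # keep the request well inside the 4k context window
}

# Every sampler setting from the list above appears in the payload.
samplers = ["temperature", "top_p", "top_k", "frequency_penalty",
            "presence_penalty", "repetition_penalty", "min_p"]
assert all(name in payload for name in samplers)
```

A real client would POST this payload as JSON to the provider's chat-completions endpoint with an API key in the request headers.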