PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0

Text generation · Concurrency cost: 1 · Model size: 10.7B · Quant: FP8 · Context length: 4K · License: cc-by-nc-sa-4.0 · Architecture: Transformer · Open weights

PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0 is an instruction-tuned causal language model developed by Kyujin Han. This 10.7 billion parameter model is fine-tuned from PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 using a Korean-centric dataset. It shows improved performance on several benchmarks, including MMLU and TruthfulQA, making it suitable for general instruction-following tasks, particularly those involving Korean-language data.


Overview

PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0 is a 10.7 billion parameter instruction-tuned language model developed by Kyujin Han. It is built upon the PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 base model and fine-tuned using the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset, which includes Korean language data.

Key Capabilities & Performance

This model is designed for general instruction-following tasks. Its scores on the Open LLM Leaderboard are:

  • Average Score: 51.70
  • ARC: 46.93
  • HellaSwag: 58.19
  • MMLU: 53.15
  • TruthfulQA: 46.52
  • Ko-CommonGenV2: 53.72

Compared to its base model, PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0 shows notable improvements in MMLU and TruthfulQA scores, suggesting enhanced reasoning and factual accuracy after instruction tuning. The fine-tuning process used a `cutoff_len` of 4096, indicating its capacity to handle moderately long contexts.
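As a minimal sketch, the model can be loaded through the Hugging Face `transformers` library; the 4096-token budget below mirrors the `cutoff_len` used during fine-tuning. The loading arguments (`device_map`, `torch_dtype`) are illustrative assumptions, not settings taken from the model card:

```python
MODEL_ID = "PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0"
MAX_CTX = 4096  # matches the cutoff_len used during fine-tuning


def load_model(model_id: str = MODEL_ID):
    """Load tokenizer and model; a sketch, assuming the transformers library."""
    # Imported lazily so the constants above are usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # assumption: place weights on available devices
        torch_dtype="auto",  # assumption: use the checkpoint's native dtype
    )
    return tokenizer, model


def fits_context(token_count: int, max_ctx: int = MAX_CTX) -> bool:
    """Check whether a tokenized prompt fits within the model's context window."""
    return token_count <= max_ctx
```

Checking prompt length against `MAX_CTX` before generation avoids silent truncation of long Korean documents.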

Use Cases

This model is suitable for applications requiring instruction-following capabilities, especially where Korean language understanding and generation are beneficial due to its training data. Its benchmark performance suggests it can be applied to tasks involving common sense reasoning, general knowledge, and truthful question answering.
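The card does not specify a prompt template. As an assumption for illustration, a minimal helper using an Alpaca-style instruction format (common among OpenOrca/Platypus-derived fine-tunes, but not confirmed by the source) might look like:

```python
def build_prompt(instruction: str, user_input: str = "") -> str:
    """Format an instruction as an Alpaca-style prompt (assumed template)."""
    if user_input:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{user_input}\n\n"
            "### Response:\n"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n"
```

The same helper works for Korean instructions, e.g. `build_prompt("한국의 수도는 어디인가요?")`.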

Popular Sampler Settings

The top three parameter combinations used by Featherless users for this model cover the following sampler parameters (specific values are not reproduced here):

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
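As a sketch, these parameters map directly onto an OpenAI-compatible completion request. The values below are illustrative placeholders, not the actual Featherless user configurations, and the request shape assumes a generic OpenAI-compatible endpoint:

```python
# Illustrative placeholder values -- NOT the actual top user configs.
sampler_settings = {
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}


def build_completion_request(prompt: str, settings: dict) -> dict:
    """Assemble a request body for an assumed OpenAI-compatible endpoint."""
    return {
        "model": "PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0",
        "prompt": prompt,
        "max_tokens": 512,  # assumption: leave headroom within the 4k context
        **settings,
    }
```

Keeping sampler settings in a dict makes it easy to swap between saved configurations without touching the request-building code.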