dddsaty/SOLAR-Instruct-ko-Adapter-Attach is a 10.7-billion-parameter causal language model built on the Upstage SOLAR-10.7B-Instruct-v1.0 base. It integrates a DPO-applied adapter fine-tuned for Korean using the beomi/OPEN-SOLAR-KO-10.7B adapter and the maywell/ko_Ultrafeedback_binarized corpus. The model is optimized for instruction-following tasks in Korean and performs well across benchmarks including ARC, HellaSwag, MMLU, and TruthfulQA.
dddsaty/SOLAR-Instruct-ko-Adapter-Attach: Korean-Optimized Instruction Model
This model, developed by dddsaty, is a 10.7 billion parameter instruction-tuned language model derived from the Upstage SOLAR-10.7B-Instruct-v1.0 base. Its key differentiator is the integration of a DPO (Direct Preference Optimization) applied adapter, specifically designed to enhance its performance and instruction-following capabilities in the Korean language.
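The card does not spell out how the adapter was attached, but a PEFT-style workflow is the common pattern for this kind of merge. The sketch below is a hypothetical illustration only: the adapter path, the dtype, and the use of LoRA-style merging via the peft library are all assumptions, not details confirmed by this card.

```python
# Hypothetical adapter-attach sketch (NOT the author's published recipe).
# Assumes a LoRA-style adapter loadable with the peft library.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the SOLAR instruct base model this card builds on.
base = AutoModelForCausalLM.from_pretrained(
    "upstage/SOLAR-10.7B-Instruct-v1.0",
    torch_dtype=torch.float16,  # assumed dtype
)

# Attach a DPO-trained Korean adapter on top of the base weights.
# "path/to/ko-dpo-adapter" is a placeholder, not a real checkpoint name.
model = PeftModel.from_pretrained(base, "path/to/ko-dpo-adapter")

# Fold the adapter into the base weights so the result is a standalone
# checkpoint, which is what a name like "Adapter-Attach" suggests.
model = model.merge_and_unload()
```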
Key Capabilities & Features
- Korean Language Specialization: Fine-tuned using the `beomi/OPEN-SOLAR-KO-10.7B` adapter and the `maywell/ko_Ultrafeedback_binarized` corpus, making it highly proficient at understanding and generating Korean text (see the loading sketch after this list).
- Instruction Following: Benefits from DPO, improving its ability to adhere to user instructions and generate relevant responses.
- Solid Benchmark Performance: Achieves a competitive average score of 74.11 across six benchmarks:
- ARC: 71.08
- HellaSwag: 88.20
- MMLU: 66.09
- TruthfulQA: 71.51
- Winogrande: 83.50
- GSM8K: 64.29
- Context Length: Supports a context length of 4096 tokens.
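A minimal loading and generation sketch, assuming the standard Hugging Face transformers API and that the tokenizer ships a chat template (both assumptions; the dtype and sampling settings are illustrative choices, not recommendations from this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dddsaty/SOLAR-Instruct-ko-Adapter-Attach"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumed; choose what fits your hardware
    device_map="auto",
)

# "What is the capital of South Korea?" in Korean.
messages = [{"role": "user", "content": "대한민국의 수도는 어디인가요?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,  # well within the 4096-token context window
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```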
Good For
- Applications requiring high-quality, instruction-tuned Korean language generation.
- Developers looking for a robust 10.7B parameter model with strong performance in Korean NLP tasks.
- Research and development in Korean-centric large language models.