ifuseok/sft-solar-10.7b-v1
Task: Text Generation
Concurrency Cost: 1
Model Size: 10.7B
Quant: FP8
Ctx Length: 4k
Architecture: Transformer
Status: Warm

The ifuseok/sft-solar-10.7b-v1 model is a 10.7 billion parameter instruction-tuned language model developed by ifuseok, built upon the Upstage SOLAR-10.7B-Instruct-v1.0 base architecture. It features a 4096-token context length and is specifically fine-tuned on a collection of Korean datasets, including nlpai-lab/databricks-dolly-15k-ko and kyujinpy/KOR-OpenOrca-Platypus-v3. This model is optimized for understanding and generating responses in Korean, making it suitable for Korean-centric conversational AI and instruction-following tasks.


Model Overview

ifuseok/sft-solar-10.7b-v1 is a 10.7 billion parameter instruction-tuned language model. It is built on the upstage/SOLAR-10.7B-Instruct-v1.0 base model, with its capabilities enhanced through supervised fine-tuning on Korean instruction data.

Key Characteristics

Base model: upstage/SOLAR-10.7B-Instruct-v1.0
Parameters: 10.7 billion
Context length: 4,096 tokens
Fine-tuning data: Korean instruction and conversational datasets, including nlpai-lab/databricks-dolly-15k-ko and kyujinpy/KOR-OpenOrca-Platypus-v3

Primary Use Case

This model is specifically designed and optimized for Korean language understanding and generation. Its fine-tuning on multiple Korean instruction and conversational datasets makes it particularly well-suited for applications requiring high-quality, instruction-following responses in Korean. Developers can leverage this model for tasks such as Korean chatbots, content generation, and question-answering systems.
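As a starting point, the snippet below is a minimal sketch of loading the model with the Hugging Face transformers library and generating a Korean response. The prompt text, dtype, and generation settings are illustrative assumptions rather than a prescribed format.

```python
# Minimal sketch: Korean instruction following with ifuseok/sft-solar-10.7b-v1.
# Prompt wording and sampling values are placeholders, not the model's required format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ifuseok/sft-solar-10.7b-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 10.7B model needs roughly 21 GB of weights in fp16
    device_map="auto",
)

prompt = "대한민국의 수도에 대해 간단히 설명해 주세요."  # "Briefly describe the capital of South Korea."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```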

Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model adjust the following sampler settings: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p. A rough sketch of how these settings map onto a request follows.
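The snippet below is a hedged sketch of passing these sampler parameters through an OpenAI-compatible client, assuming the model is served behind such an endpoint. The base URL, API key, and every numeric value are placeholder assumptions, not the actual top configurations; top_k, repetition_penalty, and min_p are non-standard OpenAI fields, so they are sent via extra_body and only take effect if the serving backend honors them.

```python
# Hedged sketch: sending the sampler parameters listed above to an assumed
# OpenAI-compatible endpoint. Substitute the real endpoint, key, and the
# config values you actually want to reproduce.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="ifuseok/sft-solar-10.7b-v1",
    messages=[{"role": "user", "content": "서울에 대해 간단히 소개해 주세요."}],
    temperature=0.7,             # placeholder values, not the measured top config
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={                 # non-standard fields; honored only if the backend supports them
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```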