The ifuseok/sft-solar-10.7b-v1 model is a 10.7 billion parameter instruction-tuned language model developed by ifuseok, built upon the Upstage SOLAR-10.7B-Instruct-v1.0 base architecture. It features a 4096-token context length and is specifically fine-tuned on a collection of Korean datasets, including nlpai-lab/databricks-dolly-15k-ko and kyujinpy/KOR-OpenOrca-Platypus-v3. This model is optimized for understanding and generating responses in Korean, making it suitable for Korean-centric conversational AI and instruction-following tasks.
## Model Overview
The ifuseok/sft-solar-10.7b-v1 is a 10.7 billion parameter instruction-tuned language model. It is built upon the upstage/SOLAR-10.7B-Instruct-v1.0 base model, enhancing its capabilities through supervised fine-tuning.
### Key Characteristics
- Base Model: Built on the upstage/SOLAR-10.7B-Instruct-v1.0 architecture.
- Parameter Count: Features 10.7 billion parameters, offering a balance between performance and computational efficiency.
- Context Length: Supports a context window of 4096 tokens.
- Training Data: Fine-tuned on a diverse set of Korean-specific datasets, including:
  - nlpai-lab/databricks-dolly-15k-ko
  - kyujinpy/KOR-OpenOrca-Platypus-v3
  - heegyu/open-korean-instructions
  - KETI-AIR/kor_boolq
  - Partial AIhub Korean-English translation data
### Primary Use Case
This model is specifically designed and optimized for Korean language understanding and generation. Its fine-tuning on multiple Korean instruction and conversational datasets makes it particularly well-suited for applications requiring high-quality, instruction-following responses in Korean. Developers can leverage this model for tasks such as Korean chatbots, content generation, and question-answering systems.
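A typical way to try the model is through the Hugging Face `transformers` library. The sketch below is illustrative, not official usage: it assumes the model follows the `### User:` / `### Assistant:` chat template of its upstage/SOLAR-10.7B-Instruct-v1.0 base (the fine-tune may define its own template, so check the model card), and the helper names `build_prompt` and `generate` are our own.

```python
# Hypothetical usage sketch for ifuseok/sft-solar-10.7b-v1.
# Assumption: the SOLAR-style "### User:" / "### Assistant:" prompt template;
# verify against the model's tokenizer/chat template before relying on it.

def build_prompt(instruction: str) -> str:
    """Wrap a (Korean) instruction in the SOLAR-style chat template."""
    return f"### User:\n{instruction}\n\n### Assistant:\n"


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a completion (downloads ~21 GB of weights)."""
    # Imported lazily so the lightweight prompt helper above works without
    # transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ifuseok/sft-solar-10.7b-v1"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    # Keep prompt plus completion within the 4096-token context window.
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    print(build_prompt("대한민국의 수도는 어디인가요?"))  # "What is the capital of South Korea?"
```

Because generation itself requires the full 10.7B-parameter weights, the prompt-formatting helper is kept separate so it can be reused with any inference backend.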