The ifuseok/sft-solar-10.7b-v1.1 model is a 10.7 billion parameter language model fine-tuned from Upstage's SOLAR-10.7B-Instruct-v1.0. It was trained on a diverse set of Korean instruction datasets, including nlpai-lab/databricks-dolly-15k-ko and kyujinpy/KOR-OpenOrca-Platypus-v3, and is tuned to follow Korean-language instructions, making it suitable for a range of Korean natural language processing tasks.
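
A minimal usage sketch with the Hugging Face transformers library is shown below, assuming the model follows the standard causal-LM loading path. The repo id comes from this card; the "### User:/### Assistant:" prompt layout and the example Korean instruction are illustrative assumptions, so check the model card for the exact chat template before relying on them.

```python
# Minimal sketch: loading ifuseok/sft-solar-10.7b-v1.1 via transformers.
# The prompt format below is an assumption; verify it against the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ifuseok/sft-solar-10.7b-v1.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 10.7B model on one GPU
    device_map="auto",
)

# Example Korean instruction ("Briefly explain Korea's four seasons.")
prompt = "### User:\n한국의 사계절에 대해 간단히 설명해 주세요.\n\n### Assistant:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the echoed prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```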