Model Overview
Developed by Yunsung Ji (Saxo), a data scientist at Linkbricks Horizon-AI, this is an 8-billion-parameter Korean language model fine-tuned from the Meta-Llama-3.1-8B-Instruct base model using SFT (Supervised Fine-Tuning) and DPO (Direct Preference Optimization).
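Because the model is fine-tuned from Meta-Llama-3.1-8B-Instruct, conversations presumably use the Llama-3.1 chat format. A minimal sketch of how such a prompt is laid out (illustrative only; in practice `tokenizer.apply_chat_template` from the `transformers` library builds this string for you):

```python
# Sketch of the Llama-3.1-style chat layout the model inherits from its base.
# For real use, prefer tokenizer.apply_chat_template over hand-built strings.

def format_llama3_prompt(messages):
    """Serialize a list of {'role', 'content'} dicts into a Llama-3.1-style prompt."""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open an assistant turn so the model generates the reply next.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful Korean-language assistant."},
    {"role": "user", "content": "안녕하세요! 자기소개 부탁드립니다."},
]
prompt = format_llama3_prompt(messages)
```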
Key Capabilities
- Multilingual Processing: Trained on Korean-Chinese-English-Japanese cross-lingual data for improved understanding and generation across all four languages.
- Logical Reasoning: Trained on logic-focused data to solve complex logical problems posed in Korean.
- Specialized Analysis: Strengthened for high-level analysis of customer reviews and social media postings.
- Coding Proficiency: Demonstrates enhanced capabilities in coding tasks.
- Extended Context Window: Supports a 32,768-token context window, enabling processing of longer inputs.
- Tool Calling: Supports tool (function) calling.
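For the tool-calling point above, Llama-3.1-derived models conventionally emit a tool call as a JSON object naming the tool and its arguments. A minimal dispatch sketch under that assumption (the `get_weather` tool, its return string, and the exact JSON shape are illustrative, not part of this model card):

```python
import json

# Hypothetical tool registry; get_weather is an illustrative stand-in.
TOOLS = {
    "get_weather": lambda city: f"{city}: 21°C, clear",
}

def dispatch_tool_call(model_output):
    """Parse a JSON tool call like {"name": ..., "parameters": {...}} and run it.

    Returns the tool's result, or None if the output is not a known tool call.
    """
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return None  # plain-text answer, not a tool call
    fn = TOOLS.get(call.get("name"))
    if fn is None:
        return None
    return fn(**call.get("parameters", {}))

result = dispatch_tool_call('{"name": "get_weather", "parameters": {"city": "Seoul"}}')
```

In a real loop, the tool result would be fed back to the model as a new turn so it can compose the final answer.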
Intended Use Cases
This model is particularly well-suited for applications requiring:
- Advanced Korean language understanding and generation.
- Solving complex logical problems in Korean.
- Detailed sentiment and content analysis from customer feedback and social media.
- Code generation and related programming tasks.
- Multilingual applications involving Korean, Chinese, English, and Japanese.
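As a practical note on the 32,768-token context window listed above, long inputs must be trimmed to fit while leaving room for the generated output. A sketch of that budgeting (the token list here is a generic stand-in; real token counts come from the model's own tokenizer):

```python
# Keep an input inside the model's 32,768-token context window,
# reserving part of the budget for the generated response.
CONTEXT_WINDOW = 32768

def clamp_to_context(tokens, reserve_for_output=1024):
    """Keep the most recent tokens that fit, leaving room for generation."""
    budget = CONTEXT_WINDOW - reserve_for_output
    return tokens[-budget:] if len(tokens) > budget else tokens

# Example: a 40,000-token input is trimmed to the most recent 31,744 tokens.
clamped = clamp_to_context(["tok"] * 40000)
```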