Saxo/Linkbricks-Horizon-AI-Korean-llama3.1-sft-dpo-70B

70B parameters · FP8 · 32768-token context window · License: apache-2.0
Overview

Linkbricks Horizon-AI Korean Llama 3.1 (70B)

This model, developed by Yunsung Ji (Saxo), a data scientist at Linkbricks Horizon-AI, is a 70-billion-parameter Korean language model. It was fine-tuned from the NousResearch/Meta-Llama-3.1-70B-Instruct base model using Supervised Fine-Tuning (SFT) followed by Direct Preference Optimization (DPO), trained on KT-CLOUD's H100-80G GPUs.
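
Since this is a standard Llama 3.1 instruct-style model on the Hugging Face Hub, it can presumably be loaded with the `transformers` library. The sketch below is illustrative, not from the model card: the system/user prompts and generation settings are hypothetical, and a 70B FP8 model requires substantial GPU memory.

```python
# Hedged usage sketch for loading and prompting the model with
# Hugging Face transformers. Model ID is from this card; everything
# else (prompts, max_new_tokens, device_map) is an illustrative assumption.
MODEL_ID = "Saxo/Linkbricks-Horizon-AI-Korean-llama3.1-sft-dpo-70B"


def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Assemble a chat-message list in the standard role/content format
    consumed by tokenizer.apply_chat_template()."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


if __name__ == "__main__":
    # Heavy dependency imported lazily so the helper above stays standalone.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    messages = build_messages(
        "당신은 유용한 한국어 어시스턴트입니다.",  # "You are a helpful Korean assistant."
        "고객 리뷰 분석에 대해 간단히 설명해 주세요.",  # "Briefly explain customer-review analysis."
    )
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```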

Key Capabilities

  • Multilingual Enhancement: Trained with Korean-Chinese-English-Japanese cross-training data, improving cross-lingual understanding.
  • Logical Reasoning: Enhanced to handle complex Korean logical problems through specialized logical data training.
  • Domain-Specific Strengths: Particularly strong in high-level analysis of customer reviews and social media postings, as well as coding tasks.
  • Tool Calling: Supports tool calling functionalities.
  • Context Window: Utilizes a 32768-token context window.
  • Tokenizer: Retains the base model's tokenizer without vocabulary expansion.
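
For the tool-calling capability above, Llama-3.1-family instruct models typically accept OpenAI-style function schemas passed via the chat template and emit JSON tool calls. The helpers below sketch that flow under those assumptions; the `get_weather` tool is a hypothetical example, not something defined by this model.

```python
# Hedged sketch of tool-calling plumbing commonly used with
# Llama-3.1-family instruct models. The tool schema shape is the
# OpenAI-style format accepted by
# tokenizer.apply_chat_template(..., tools=[...]); the get_weather
# tool itself is a made-up example.
import json


def make_tool(name: str, description: str, parameters: dict) -> dict:
    """Wrap a function spec in an OpenAI-style tool schema."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,
        },
    }


# Hypothetical tool definition for illustration only.
weather_tool = make_tool(
    "get_weather",
    "Return the current weather for a city.",
    {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)


def parse_tool_call(raw: str) -> dict:
    """Parse a model-emitted JSON tool call such as
    '{"name": "get_weather", "parameters": {"city": "Seoul"}}'
    into a uniform {name, arguments} dict."""
    call = json.loads(raw)
    return {"name": call["name"], "arguments": call.get("parameters", {})}
```

In practice the schema list would be passed as `tools=[weather_tool]` when applying the chat template, and `parse_tool_call` would run on the model's raw JSON output before dispatching to the real function.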

Good For

  • Applications requiring advanced Korean language understanding and generation.
  • Tasks involving multilingual text analysis, especially across Korean, Chinese, English, and Japanese.
  • Use cases demanding logical problem-solving in Korean.
  • Customer-feedback analysis, social-media trend analysis, and coding assistance.