tellang/yeji-4b-instruct-v9
tellang/yeji-4b-instruct-v9 is a 4-billion-parameter Korean large language model, fine-tuned with RSLoRA on the Qwen3-4B base model. It is optimized specifically for Korean fortune-telling and divination domains, including Bazi, Western Astrology, Tarot, and Hwatu, and excels at generating responses on these specialized topics, making it well suited to applications that need domain-specific Korean text generation for fortune-telling.
Overview
tellang/yeji-4b-instruct-v9 is a specialized Korean large language model (LLM) developed by tellang. It is built on the Qwen3-4B base model and fine-tuned with RSLoRA (Rank-Stabilized LoRA). This release is the ninth iteration (v9) of the model, focused on optimization for specific Korean divination domains.
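As a concrete starting point, the sketch below shows minimal inference with the standard Hugging Face `transformers` chat-template API. The Korean system prompt, generation settings, and helper names are illustrative assumptions, not values documented for this model.

```python
MODEL_ID = "tellang/yeji-4b-instruct-v9"

def build_messages(question: str) -> list[dict]:
    # Chat-format messages; the Korean system prompt is an illustrative guess.
    return [
        {"role": "system", "content": "당신은 한국어 운세 상담 전문가입니다."},  # "You are a Korean fortune-telling expert."
        {"role": "user", "content": question},
    ]

def generate_reading(question: str, max_new_tokens: int = 512) -> str:
    # Heavy imports are kept inside the function so the helper above stays cheap.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    input_ids = tokenizer.apply_chat_template(
        build_messages(question), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

The prompt construction is separated from generation so applications can reuse their own prompting conventions with the same loading code.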
Key Capabilities
- Domain-Specific Expertise: Highly specialized in four Korean fortune-telling domains: 사주 (Bazi), 서양 점성술 (Western Astrology), 타로 (Tarot), and 화투 (Hwatu).
- Korean Language Focus: Trained exclusively on the Korean dataset yeji-fortune-telling-ko-v9, comprising 31,625 samples.
- Efficient Fine-tuning: Utilizes RSLoRA for efficient adaptation of the base model to the target domains.
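For readers curious what rank-stabilized fine-tuning looks like in practice, here is a hypothetical configuration sketch using the PEFT library's `LoraConfig`. The rank, alpha, and target modules are illustrative assumptions, not the values actually used to train v9.

```python
from peft import LoraConfig

# Hypothetical RSLoRA setup; concrete numbers below are assumptions.
rslora_config = LoraConfig(
    r=16,                 # adapter rank (assumed)
    lora_alpha=32,        # scaling numerator (assumed)
    use_rslora=True,      # rank-stabilized scaling: alpha / sqrt(r) instead of alpha / r
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

The `use_rslora=True` flag is what distinguishes RSLoRA from plain LoRA in PEFT: it rescales the adapter by alpha / sqrt(r), which keeps gradients stable as the rank grows.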
Good For
- Fortune-Telling Applications: Ideal for generating detailed interpretations and responses within the specified divination contexts.
- Korean NLP in Specialized Domains: Useful for developers building applications that require nuanced understanding and generation of Korean text related to Bazi, astrology, tarot, or Hwatu.
- Entertainment and Informational Tools: Can be integrated into platforms providing entertainment-focused divination content.
Limitations
- Domain Specificity: General conversational performance may be reduced compared to the base model due to its highly specialized training.
- Entertainment Use Only: Outputs are for entertainment purposes and should not be used for real-world decision-making.
- Chinese Character Mixing: Some responses may include Chinese (Han) characters, since the training data contains Chinese Bazi terminology.
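Because outputs can mix in Han characters (e.g. Bazi terms such as 甲子), downstream applications may want to detect or strip them before display. A minimal stdlib sketch using Unicode block ranges, not part of the model itself:

```python
import re

# CJK Unified Ideographs plus Extension A: enough to catch common Bazi
# terminology (e.g. 甲, 乙, 丙) while leaving Hangul (U+AC00–U+D7AF) untouched.
HAN_RE = re.compile(r"[\u3400-\u4DBF\u4E00-\u9FFF]")

def contains_han(text: str) -> bool:
    """Return True if the text contains any Chinese (Han) characters."""
    return bool(HAN_RE.search(text))

def strip_han(text: str, replacement: str = "") -> str:
    """Remove Han characters, keeping Hangul and everything else."""
    return HAN_RE.sub(replacement, text)
```

Whether to strip or merely flag such characters is an application choice; for Bazi content in particular, the Chinese terms are often meaningful and may be worth keeping alongside a Korean gloss.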