tellang/yeji-8b-rslora-v7

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Jan 24, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

tellang/yeji-8b-rslora-v7 is an 8.2-billion-parameter large language model developed by tellang on top of the Qwen3-8B-Base architecture. It is fine-tuned with RSLoRA for Korean fortune-telling and divination, covering Saju (Four Pillars of Destiny), Western astrology, Tarot, and Hwatu, and is intended to provide specialized consultation within these cultural contexts.


tellang/yeji-8b-rslora-v7: Korean Fortune-Telling LLM

tellang/yeji-8b-rslora-v7 is a specialized 8.2-billion-parameter large language model developed by tellang and built on the Qwen3-8B-Base architecture. It was fine-tuned with RSLoRA (Rank-Stabilized LoRA) on the yeji-fortune-telling-ko-v3 dataset, making it proficient in Korean fortune-telling and divination.
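
For orientation, here is a minimal inference sketch using Hugging Face Transformers. The repo id comes from the card above, but it is an assumption that the repository ships merged weights with a bundled chat template rather than a bare RSLoRA adapter; the Korean prompt is purely illustrative.

```python
# Minimal sketch, assuming merged weights and a chat template in the repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tellang/yeji-8b-rslora-v7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    # "Draw one tarot card for today and interpret it."
    {"role": "user", "content": "오늘의 타로 카드 한 장을 뽑아 해석해 주세요."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```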

Key Capabilities

  • Domain Specialization: Expert in four core Korean fortune-telling domains:
    • 사주 (Saju): the Korean Four Pillars of Destiny reading, derived from the Chinese Bazi tradition.
    • 서양 점성술 (Western Astrology): Astrological readings.
    • 타로 (Tarot): Tarot card interpretations.
    • 화투 (Hwatu): Korean flower card divination.
  • Base Model: Utilizes the robust Qwen3-8B-Base as its foundation.
  • Fine-tuning Method: Employs RSLoRA for efficient and effective domain adaptation; a configuration sketch follows this list.
  • Version Stability: Version 7 is designated as the stable and primary model in the YEJI project's 8B series.
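
To make the fine-tuning method concrete, the sketch below shows how an RSLoRA adapter is typically configured with the peft library on top of Qwen3-8B-Base. The rank, alpha, and target modules are illustrative assumptions, not the values used to train v7.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base model named on the card, loaded for adapter training.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B-Base", torch_dtype="auto")

config = LoraConfig(
    r=64,                # adapter rank (assumed; the published value is not stated here)
    lora_alpha=64,       # scaling numerator
    use_rslora=True,     # rank-stabilized scaling: alpha / sqrt(r) instead of alpha / r
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # confirms only the low-rank adapters are trainable
```

The rank-stabilized scaling is what keeps higher-rank adapters from being under-scaled during training, which is the usual reason to prefer RSLoRA over vanilla LoRA at larger ranks.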

Good For

  • Specialized Applications: Ideal for applications requiring deep expertise in Korean fortune-telling and divination.
  • Cultural Context: Provides culturally grounded responses appropriate to its specialized domains.
  • Research and Development: Useful for researchers and developers exploring domain-specific LLM fine-tuning and cultural AI applications.

Limitations

  • Domain Specificity: Performance is optimized for Korean fortune-telling; general conversational ability may be reduced relative to general-purpose instruction-tuned models such as Qwen3-8B.
  • Resource Intensive: Requires approximately 16 GB of VRAM in its standard form. Lighter 4B and AWQ-quantized variants are available for reduced resource needs; a reduced-memory serving sketch follows this list.
  • Entertainment Purpose: Fortune-telling results are intended for entertainment only and should not be used for critical decision-making.
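
As a hedged illustration of the reduced-memory path, the sketch below serves the model with vLLM using on-the-fly FP8 quantization and the 32k context length listed on the card. The sampling settings are illustrative, and this is not a verified recipe for the separately published 4B or AWQ variants.

```python
from vllm import LLM, SamplingParams

# FP8 weight quantization roughly halves weight memory versus BF16,
# helping the 8B model fit comfortably under 16 GB of VRAM.
llm = LLM(
    model="tellang/yeji-8b-rslora-v7",
    quantization="fp8",    # matches the Quant field on the card
    max_model_len=32768,   # 32k context length from the card
)

params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(["오늘의 운세를 봐 주세요."], params)  # "Please read today's fortune."
print(outputs[0].outputs[0].text)
```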