ganchengguang/Yoko-7B-Japanese-v0
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · License: MIT · Architecture: Transformer · Open Weights · Cold
ganchengguang/Yoko-7B-Japanese-v0 is a 7-billion-parameter LLaMA2-based causal language model fine-tuned by ganchengguang with QLoRA on a 49,000-sample subset of the Guanaco chat dataset. It demonstrates improved performance on Chinese and Japanese language tasks, making it suitable for chat-based applications that need stronger multilingual capabilities, particularly in East Asian languages.
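For reference, a minimal sketch of running the model locally with Hugging Face transformers. The plain-text prompt (rather than a specific chat template) and the sampling parameters are assumptions, not taken from the model card:

```python
# Minimal sketch: load and query ganchengguang/Yoko-7B-Japanese-v0 locally.
# Assumptions: fp16 weights (the card lists an FP8 quant for serving, but
# transformers loads the released weights), and a plain-text prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ganchengguang/Yoko-7B-Japanese-v0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 7B params in ~14 GB of VRAM
    device_map="auto",
)

prompt = "日本の首都はどこですか？"  # "What is the capital of Japan?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,  # model supports a 4k context window
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```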