ganchengguang/Yoko_13B_Japanese_QLoRA
Text generation · Model size: 13B · Quantization: FP8 · Context length: 4K · License: MIT · Architecture: Transformer · Open weights
Yoko_13B_Japanese_QLoRA is a 13-billion-parameter language model developed by ganchengguang, with contributions from the Yokohama National University Mori Lab. It is a QLoRA fine-tune of Llama-2-13b-chat-hf, optimized for improved performance in Japanese and Chinese. The model was trained on the llm-japanese-dataset plus additional chat and non-chat samples, making it suitable for conversational and general text generation tasks in these languages.
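Since the model derives from Llama-2-13b-chat-hf, it presumably expects Llama-2's `[INST]`-style chat template. The sketch below is an assumption, not an official usage example from the model card: it builds such a prompt and notes (in comments) how one would typically load the weights with the Hugging Face `transformers` library.

```python
# Hypothetical usage sketch. The [INST] prompt template is assumed from the
# Llama-2-chat base model; it is not documented on this model card.
MODEL_ID = "ganchengguang/Yoko_13B_Japanese_QLoRA"  # repo name from the card

def build_llama2_prompt(user_message: str, system: str = "") -> str:
    """Wrap a user message in the Llama-2 chat [INST] template."""
    sys_block = f"<<SYS>>\n{system}\n<</SYS>>\n\n" if system else ""
    return f"[INST] {sys_block}{user_message} [/INST]"

prompt = build_llama2_prompt("日本の首都はどこですか？")
print(prompt)  # [INST] 日本の首都はどこですか？ [/INST]

# Loading the 13B weights needs the `transformers` library and substantial
# memory; shown as comments for illustration only:
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained(MODEL_ID)
#   model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
```

Generation then follows the standard `transformers` pattern: tokenize the prompt, call `model.generate`, and decode the output.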