rinna/youri-7b-instruction
The rinna/youri-7b-instruction model is a 7 billion parameter instruction-tuned causal language model developed by rinna, based on the Llama2 architecture. It is fine-tuned on a mix of Japanese and English instruction datasets presented in the Alpaca prompt format. The model excels at following instructions and is particularly well-suited for tasks requiring understanding and generation in Japanese contexts, thanks to its specialized training data.
Overview
rinna/youri-7b-instruction is a 7 billion parameter language model developed by rinna. It is an instruction-tuned version of the base rinna/youri-7b model, which is built upon the Llama2 architecture.
Key Capabilities & Features
- Instruction Following: Optimized to follow instructions, adopting the Alpaca input format for diverse task execution.
- Multilingual Fine-tuning: Fine-tuned on a curated subset of datasets, including:
  - Databricks Dolly (English and Japanese translations)
  - FLAN instruction-tuning data (English and Japanese translations)
  - Izumi lab LLM Japanese dataset (specific sections: alt, aozora-txt, CourseraParallel, ParaNatCom, Tab-delimited_Bilingual_Sentence_Pairs, tanaka-corpus, wikinews, wordnet, yasashi-japanese)
- Japanese Language Proficiency: The extensive Japanese data in its fine-tuning mix makes it well suited to Japanese language understanding and generation tasks.
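Since the model adopts the Alpaca input format, prompts are assembled from an instruction and an optional input/context section. The sketch below shows one way to build such a prompt and run generation with the Hugging Face `transformers` library; the exact Japanese template wording and the generation parameters are assumptions, so check the official model card before relying on them.

```python
def build_prompt(instruction: str, context: str = "") -> str:
    """Assemble an Alpaca-style prompt for rinna/youri-7b-instruction.

    The Japanese template below is an assumption based on common
    Japanese Alpaca conventions; verify it against the model card.
    """
    header = (
        "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。"
        "要求を適切に満たす応答を書きなさい。"
    )
    parts = [header, f"### 指示:\n{instruction}"]
    if context:  # the input section is omitted for context-free instructions
        parts.append(f"### 入力:\n{context}")
    parts.append("### 応答:\n")
    return "\n\n".join(parts)


# Generation sketch (commented out: downloads ~14 GB of weights and
# needs a GPU for reasonable speed):
#
# import torch
# from transformers import AutoModelForCausalLM, AutoTokenizer
#
# tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-instruction")
# model = AutoModelForCausalLM.from_pretrained(
#     "rinna/youri-7b-instruction",
#     torch_dtype=torch.float16,
#     device_map="auto",
# )
# prompt = build_prompt("次の文を英語に翻訳してください。", "猫が好きです。")
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# output_ids = model.generate(**inputs, max_new_tokens=128)
# print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Keeping the prompt builder separate from the generation call makes it easy to unit-test the template and to swap in different sampling settings later.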
Use Cases
This model is particularly suitable for applications requiring instruction-based text generation and understanding, especially within Japanese language contexts. Its fine-tuning on diverse instruction datasets makes it adaptable to various prompts and tasks.