rinna/llama-3-youko-8b-instruct
Text generation · Model size: 8B · Quant: FP8 · Context length: 8k · Published: Jul 21, 2024 · License: llama3 · Architecture: Transformer · Concurrency cost: 1

rinna/llama-3-youko-8b-instruct is an 8-billion-parameter instruction-tuned causal language model developed by rinna and built on the Llama 3 architecture. It was post-trained with supervised fine-tuning (SFT), Chat Vector, and direct preference optimization (DPO) on a diverse dataset that includes Japanese and English subsets. The model targets instruction-following tasks, adopts the Llama-3 chat format, and supports an 8192-token context length.
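Since the model adopts the Llama-3 chat format, prompts must follow that template's special tokens. In practice you would let `tokenizer.apply_chat_template` from the `transformers` library assemble this for you; the sketch below (a hypothetical helper, not part of the model's release) just makes the underlying token layout explicit.

```python
def build_llama3_prompt(messages):
    """Assemble a prompt string in the Llama-3 chat format.

    `messages` is a list of {"role": ..., "content": ...} dicts, as used by
    tokenizer.apply_chat_template in the transformers library.
    """
    parts = ["<|begin_of_text|>"]
    for m in messages:
        # Each turn is wrapped in role headers and terminated with <|eot_id|>.
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # End with an open assistant header so generation continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

With `transformers`, the equivalent is `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`, which also handles tokenization.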
