aixsatoshi/Meta-Llama-3.1-8B-Instruct-plus-Swallow is an 8-billion-parameter model derived from Llama 3.1 and enhanced specifically for Japanese-language fluency. It merges the Japanese continual pre-training improvements of the Swallow-8B model into the Meta-Llama-3.1-8B-Instruct base. The model excels at Japanese-language tasks, leveraging its 32,768-token context length for nuanced understanding and generation.
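As a minimal sketch of how such a model is typically used, the snippet below loads the checkpoint with Hugging Face transformers and generates a Japanese response. It assumes the repository ships a Llama-3.1-style chat template and that a GPU with sufficient memory is available; the prompt and generation settings are illustrative, not prescribed by the model card.

```python
# Sketch: load the model and generate a Japanese reply.
# Assumes a Llama-3.1-style chat template is bundled with the repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aixsatoshi/Meta-Llama-3.1-8B-Instruct-plus-Swallow"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; fp16 also works on most GPUs
    device_map="auto",
)

# Example Japanese prompt exercising instruction-following ability.
messages = [
    {"role": "user", "content": "日本の四季について簡単に説明してください。"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=256, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```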