elyza/ELYZA-japanese-Llama-2-13b-instruct

Hugging Face
TEXT GENERATION

  • Concurrency Cost: 1
  • Model Size: 13B
  • Quant: FP8
  • Context Length: 4k
  • Published: Dec 25, 2023
  • License: llama2
  • Architecture: Transformer
  • Availability: Open Weights, Warm

ELYZA-japanese-Llama-2-13b-instruct is a 13 billion parameter instruction-tuned language model developed by ELYZA, based on the Llama 2 architecture. It has undergone additional pre-training to enhance its Japanese language capabilities. This model is specifically designed to function as a highly capable Japanese assistant, excelling in tasks requiring nuanced understanding and generation of Japanese text.


Overview

ELYZA-japanese-Llama-2-13b-instruct is a 13 billion parameter instruction-tuned model developed by ELYZA. It is built upon the Llama 2 foundation model and has been specifically enhanced through additional pre-training to significantly improve its proficiency in the Japanese language. This model aims to provide robust performance for various Japanese natural language processing tasks.

Key Capabilities

  • Enhanced Japanese Language Understanding: The model has undergone specific pre-training to extend its Japanese language capabilities beyond the base Llama 2 model.
  • Instruction Following: As an instruct model, it is fine-tuned to follow user instructions effectively, making it suitable for conversational agents and task-oriented applications.
  • Japanese Assistant Role: The default system prompt configures it as an "honest and excellent Japanese assistant," indicating its intended use for general-purpose assistance in Japanese.
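The default system prompt and instruction format follow the Llama 2 chat convention. A minimal sketch of how a single-turn prompt can be assembled, assuming the `[INST]`/`<<SYS>>` tag layout published for the ELYZA Llama 2 family (the exact Japanese wording of the default system prompt is taken from the model card and should be verified against the repository):

```python
# Llama 2 chat prompt layout, as used by the ELYZA Llama 2 models.
# The system prompt translates to "You are an honest and excellent
# Japanese assistant."
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本語のアシスタントです。"

def build_prompt(user_message: str, system: str = DEFAULT_SYSTEM_PROMPT) -> str:
    """Wrap a single-turn user message in Llama 2 instruction tags."""
    return f"<s>{B_INST} {B_SYS}{system}{E_SYS}{user_message} {E_INST} "

prompt = build_prompt("日本の首都はどこですか？")  # "What is the capital of Japan?"
print(prompt)
```

The resulting string is what the tokenizer receives; multi-turn conversations repeat the `[INST] … [/INST]` pair per turn, with the system block only in the first turn.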

Good for

  • Japanese Text Generation: Creating coherent and contextually relevant text in Japanese.
  • Conversational AI in Japanese: Developing chatbots or virtual assistants that interact in Japanese.
  • Japanese Language Applications: Any application requiring a strong understanding and generation of Japanese, where the base Llama 2 might fall short in specific Japanese nuances.
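For hosted use, Featherless exposes the model behind an OpenAI-compatible chat API. A hedged sketch of the request body such an endpoint would accept; the payload is only constructed and printed here, no request is sent, and the example user message is illustrative:

```python
import json

# OpenAI-compatible chat-completion payload. The model id is the
# Hugging Face repository name; messages follow the standard
# system/user role structure.
payload = {
    "model": "elyza/ELYZA-japanese-Llama-2-13b-instruct",
    "messages": [
        {"role": "system", "content": "あなたは誠実で優秀な日本語のアシスタントです。"},
        {"role": "user", "content": "自己紹介をしてください。"},  # "Please introduce yourself."
    ],
    "max_tokens": 256,
}
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

With an OpenAI-style client, the same dictionary maps directly onto the keyword arguments of a chat-completions call.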

Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. Each configuration sets the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
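The specific top-3 value combinations are shown in the page's interactive tabs and are not reproduced here. A sketch of how these parameters are typically passed as a generation config, with placeholder values that are purely illustrative, not the Featherless user settings:

```python
# Illustrative sampler configuration covering the parameters listed
# above. All values are hypothetical placeholders, not the actual
# top-3 user configurations.
sampler_config = {
    "temperature": 0.7,         # randomness of token sampling
    "top_p": 0.9,               # nucleus sampling cutoff
    "top_k": 40,                # restrict sampling to k most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they appeared
    "presence_penalty": 0.0,    # penalize tokens that appeared at all
    "repetition_penalty": 1.1,  # multiplicative penalty on repeats
    "min_p": 0.05,              # drop tokens below this fraction of the top prob
}
print(sampler_config)
```

In an OpenAI-compatible request these keys are sent alongside `model` and `messages`; with local `transformers` generation, the equivalent arguments go to `model.generate`.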