haqishen/Llama-3-8B-Japanese-Instruct
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 8k · Published: Apr 23, 2024 · License: llama3 · Architecture: Transformer

haqishen/Llama-3-8B-Japanese-Instruct is a Llama-3-8B-Instruct model fine-tuned by Qishen Ha for Japanese conversational tasks. Trained on the fujiki/japanese_hh-rlhf-49k dataset with LLaMA-Factory, it generates natural, human-like Japanese responses and supports a context length of 8,192 tokens, making it well suited to Japanese-centric NLP applications.
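
For reference, here is a minimal sketch of running the model locally with Hugging Face transformers. The generation parameters (temperature, top_p, max_new_tokens) and the example prompt are illustrative assumptions, not settings published for this model.

```python
# Minimal sketch: load the model with the standard transformers chat-template API.
# Prompt contents and sampling values below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "haqishen/Llama-3-8B-Japanese-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "あなたは日本語で回答する親切なアシスタントです。"},  # "You are a helpful assistant that replies in Japanese."
    {"role": "user", "content": "富士山について教えてください。"},  # "Tell me about Mt. Fuji."
]

# Llama-3 Instruct models ship a chat template, so apply_chat_template
# builds the correctly formatted prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Llama-3 uses <|eot_id|> to end assistant turns, so include it as a stop token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```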


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model cover the following sampler settings; an example request that passes these settings is sketched after the list.

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
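
The sketch below shows how these sampler settings could be supplied through an OpenAI-compatible chat completions client. The base URL, environment variable name, and every parameter value are assumptions for illustration, not configurations published for this model; parameters outside the OpenAI schema (top_k, repetition_penalty, min_p) are passed via the client's extra_body field, provided the backend accepts them.

```python
# Hypothetical sketch: sending the sampler settings listed above through an
# OpenAI-compatible endpoint. Endpoint URL, env var name, and values are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",   # assumed endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],  # assumed variable name
)

response = client.chat.completions.create(
    model="haqishen/Llama-3-8B-Japanese-Instruct",
    messages=[
        {"role": "user", "content": "自己紹介をしてください。"},  # "Please introduce yourself."
    ],
    # Sampler parameters in the standard OpenAI schema (illustrative values).
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-OpenAI sampler parameters go through extra_body, if supported.
    extra_body={
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```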