haqishen/h2o-Llama-3-8B-Japanese-Instruct
TEXT GENERATION
Concurrency Cost: 1
Model Size: 8B
Quant: FP8
Context Length: 8k
Published: Oct 5, 2024
License: llama3
Architecture: Transformer
haqishen/h2o-Llama-3-8B-Japanese-Instruct is an 8-billion-parameter, instruction-tuned Llama 3 model developed by Qishen Ha. It was fine-tuned on a Japanese conversation dataset (japanese_hh-rlhf-49k) using the h2o-llmstudio framework. The model generates Japanese conversational responses with a maximum context length of 8192 tokens; its primary strength is specialized Japanese-language instruction following.
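Because this is an instruction-tuned Llama 3 variant, prompts are typically formatted with the Llama 3 chat template. The sketch below builds such a prompt by hand to show the token layout; in practice the tokenizer's `apply_chat_template` method would do this for you, and the example message is purely illustrative.

```python
# Build a Llama 3 instruct-style prompt by hand. In practice you would call
# tokenizer.apply_chat_template; this shows the underlying token layout.

def build_llama3_prompt(messages):
    """messages: list of {"role": ..., "content": ...} dicts."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open an assistant turn so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    # "What is the tallest mountain in Japan?"
    {"role": "user", "content": "日本で一番高い山は何ですか？"}
])
print(prompt)
```

The generated completion should be cut off at the model's `<|eot_id|>` stop token.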
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
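The parameters above map directly onto the sampling options of a typical chat-completions request. A minimal sketch of such a configuration is below; the values are illustrative placeholders of my choosing, not the actual user combinations (which did not load above).

```python
# Illustrative sampler configuration for a chat-completions request.
# Every value here is a placeholder, NOT a recorded Featherless user config.
sampler_settings = {
    "temperature": 0.7,        # randomness of the sampling distribution
    "top_p": 0.9,              # nucleus sampling: keep tokens up to 90% cumulative prob
    "top_k": 40,               # restrict sampling to the 40 most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens proportionally to prior frequency
    "presence_penalty": 0.0,   # flat penalty on any token already present
    "repetition_penalty": 1.1, # multiplicative penalty on repeated tokens
    "min_p": 0.05,             # drop tokens below 5% of the top token's probability
}
print(sampler_settings)
```

Lower temperature and top_p make output more deterministic; the penalty settings discourage the repetition loops that small instruct models are prone to.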