shisa-ai/shisa-v1-llama3-8b
Text generation
Concurrency cost: 1 | Model size: 8B | Quant: FP8 | Context length: 8k | License: llama3 | Architecture: Transformer
shisa-ai/shisa-v1-llama3-8b is an 8 billion parameter Llama 3-based instruction-tuned causal language model developed by shisa-ai. Fine-tuned from Meta-Llama-3-8B-Instruct, this model demonstrates strong performance on Japanese language benchmarks, achieving an average score of 6.59 across ELYZA100, JA MT-Bench, Rakuda, and Tengu-Bench. It is optimized for general-purpose Japanese language tasks, offering a competitive option in its size class.
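Because the model is fine-tuned from Meta-Llama-3-8B-Instruct, prompts follow the standard Llama 3 instruct chat template. A minimal sketch of assembling that prompt by hand is below; the special tokens shown are the standard Llama 3 ones, and in practice `tokenizer.apply_chat_template` from Hugging Face `transformers` does this for you, so verify against the tokenizer config shipped with shisa-v1-llama3-8b before relying on it.

```python
def build_llama3_prompt(messages):
    """Assemble a Llama 3 instruct prompt from a list of role/content messages.

    Uses the standard Llama 3 special tokens (an assumption here; check the
    model's tokenizer_config for the authoritative template).
    """
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n")
        parts.append(m["content"] + "<|eot_id|>")
    # End with an open assistant header so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "あなたは役に立つアシスタントです。"},
    {"role": "user", "content": "日本の首都はどこですか？"},
])
```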
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.
temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
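The parameters listed above map directly onto the request body of an OpenAI-compatible chat completions call. A minimal sketch follows; the values are placeholders for illustration, not the actual top user configurations, and note that `repetition_penalty` and `min_p` are extensions accepted by many OpenAI-compatible inference servers rather than standard OpenAI fields.

```python
# Placeholder sampler values -- illustrative only, not the real top-3 configs.
sampler_settings = {
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,  # non-standard OpenAI field
    "min_p": 0.05,               # non-standard OpenAI field
}

# Request body for an OpenAI-compatible /v1/chat/completions endpoint.
payload = {
    "model": "shisa-ai/shisa-v1-llama3-8b",
    "messages": [{"role": "user", "content": "こんにちは！"}],
    **sampler_settings,
}
```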