alfredplpl/Llama-3-8B-Instruct-Ja
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 8k · Published: Apr 22, 2024 · License: llama3 · Architecture: Transformer

alfredplpl/Llama-3-8B-Instruct-Ja is an 8-billion-parameter instruction-tuned causal language model based on Meta's Llama 3 architecture and optimized for Japanese. It extends the original Llama 3's Japanese capabilities through additional instruction tuning on Japanese datasets, and is intended for Japanese natural language understanding and generation tasks that require strong Japanese linguistic performance.
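As a Llama 3 derivative, the model expects prompts in the Llama 3 instruct chat format. The sketch below assembles such a prompt by hand purely for illustration; the helper name `build_llama3_prompt` is my own, and in practice you should rely on the model tokenizer's chat template so the formatting always matches the model's own configuration.

```python
def build_llama3_prompt(messages):
    """Assemble a Llama-3-style chat prompt string.

    Illustrative sketch of the Llama 3 instruct format; real code should
    use the tokenizer's apply_chat_template instead of hand-building this.
    """
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        # Each turn is a role header followed by the content and <|eot_id|>.
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open the assistant header to cue the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "あなたは親切なアシスタントです。"},
    {"role": "user", "content": "日本の首都はどこですか？"},
])
```

The resulting string is what the tokenizer would produce from the same messages, which is useful when debugging why a served model ignores the system prompt or fails to stop at `<|eot_id|>`.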


Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model cover the following sampler settings:

- temperature: scales the logits before sampling; lower values make output more deterministic
- top_p: nucleus sampling; keeps the smallest set of tokens whose cumulative probability reaches p
- top_k: restricts sampling to the k most likely tokens
- frequency_penalty: penalizes tokens in proportion to how often they have already appeared
- presence_penalty: penalizes tokens that have appeared at all, encouraging new topics
- repetition_penalty: multiplicatively discounts logits of previously generated tokens
- min_p: drops tokens whose probability falls below min_p times the top token's probability
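To make the interplay of these settings concrete, here is a minimal, self-contained sketch of how temperature, top_k, top_p, and min_p filter a raw logit distribution before a token is drawn. The function name and structure are illustrative, not the actual sampler implementation used by any particular inference server.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=0, top_p=1.0,
                      min_p=0.0, rng=None):
    """Draw a token index from raw logits using common sampler settings."""
    rng = rng or random.Random()
    # Temperature: scale logits before softmax (lower = sharper distribution).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    # (token_id, probability) pairs, most likely first.
    probs = sorted(((i, e / total) for i, e in enumerate(exps)),
                   key=lambda p: p[1], reverse=True)
    # top_k: keep only the k most likely tokens (0 disables the filter).
    if top_k > 0:
        probs = probs[:top_k]
    # min_p: drop tokens below min_p times the top token's probability.
    if min_p > 0.0:
        cutoff = min_p * probs[0][1]
        probs = [p for p in probs if p[1] >= cutoff]
    # top_p (nucleus): keep the smallest prefix whose mass reaches top_p.
    if top_p < 1.0:
        kept, mass = [], 0.0
        for p in probs:
            kept.append(p)
            mass += p[1]
            if mass >= top_p:
                break
        probs = kept
    # Renormalize over the surviving tokens and draw one.
    total = sum(p for _, p in probs)
    r = rng.random() * total
    for i, p in probs:
        r -= p
        if r <= 0:
            return i
    return probs[-1][0]
```

For example, `top_k=1` always returns the argmax token, while `temperature=1.0, top_p=1.0, top_k=0, min_p=0.0` samples from the full softmax distribution. Note that samplers differ in the order they apply these filters, so identical settings can behave slightly differently across inference backends.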