cyberagent/Llama-3.1-70B-Japanese-Instruct-2407
Text generation · Concurrency cost: 4 · Model size: 70B · Quantization: FP8 · Context length: 32k · Published: Jul 26, 2024 · License: llama3.1 · Architecture: Transformer

cyberagent/Llama-3.1-70B-Japanese-Instruct-2407 is a 70 billion parameter instruction-tuned causal language model developed by CyberAgent, continually pre-trained from Meta's Llama-3.1-70B-Instruct. This model is specifically optimized for high-quality Japanese language understanding and generation. It leverages the robust Llama 3.1 architecture to provide advanced performance for Japanese-centric natural language processing tasks.
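For local experimentation, the model can be loaded with the Hugging Face transformers library and prompted through the Llama 3.1 chat template. The snippet below is a minimal sketch, assuming hardware with enough memory for a 70B-parameter model; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load the model with transformers and generate a Japanese
# response via the Llama 3.1 chat template. Assumes multi-GPU hardware capable
# of holding a 70B model in bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cyberagent/Llama-3.1-70B-Japanese-Instruct-2407"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "日本の首都について簡単に教えてください。"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```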


Popular Sampler Settings

The parameter combinations most commonly used by Featherless users for this model adjust the sampler settings listed below (a request sketch follows the list).

temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p
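As an illustration of how these settings are applied in practice, the sketch below sends a request with explicit sampler parameters through an OpenAI-compatible client. The base URL, API key placeholder, and all parameter values are assumptions for demonstration, not the actual popular configurations; non-standard knobs such as top_k, repetition_penalty, and min_p are passed via extra_body, and support for them depends on the serving provider.

```python
# Illustrative sketch: chat completion request with explicit sampler settings.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="cyberagent/Llama-3.1-70B-Japanese-Instruct-2407",
    messages=[{"role": "user", "content": "自己紹介をしてください。"}],
    temperature=0.7,        # illustrative value
    top_p=0.9,              # illustrative value
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Sampler knobs outside the standard OpenAI schema are typically forwarded
    # via extra_body on OpenAI-compatible servers; values here are placeholders.
    extra_body={"top_k": 40, "repetition_penalty": 1.05, "min_p": 0.05},
)
print(response.choices[0].message.content)
```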