AXCXEPT/Llama-3.1-8B-EZO-1.1-it
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32K · Published: Jul 28, 2024 · License: llama3.1 · Architecture: Transformer
AXCXEPT/Llama-3.1-8B-EZO-1.1-it is an 8 billion parameter instruction-tuned causal language model developed by AXCXEPT, based on Meta AI's Llama 3.1 architecture. This model has been fine-tuned to significantly enhance its performance on Japanese language tasks. It leverages a 32K context window and an innovative training approach using high-quality Japanese Wikipedia and FineWeb data. The primary use case for this model is generating high-quality responses in Japanese across various contexts.
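Because this is a standard Llama-architecture causal LM published on the Hugging Face Hub, it can be loaded with the transformers library. Below is a minimal sketch, assuming a GPU with enough memory for an 8B model; the Japanese prompt, dtype, and generation length are illustrative choices, not settings prescribed by AXCXEPT (the FP8 quant noted above is how Featherless serves the model, not a property of the Hub checkpoint):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AXCXEPT/Llama-3.1-8B-EZO-1.1-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 is a reasonable default for local loading
    device_map="auto",
)

# Llama 3.1 instruct models ship a chat template; apply_chat_template
# renders the conversation into the prompt format the model expects.
messages = [
    {"role": "user", "content": "日本の四季について簡単に説明してください。"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))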
Popular Sampler Settings
The three most popular parameter combinations used by Featherless users for this model. Each configuration sets the following sampling parameters (a request sketch using them follows the list):
temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
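All of these parameters can be set per request. Below is a minimal sketch using the openai Python SDK, assuming Featherless's OpenAI-compatible endpoint (https://api.featherless.ai/v1 is an assumption here); the values shown are illustrative placeholders, not the actual top-3 user configs:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible base URL
    api_key="YOUR_FEATHERLESS_API_KEY",        # placeholder
)

response = client.chat.completions.create(
    model="AXCXEPT/Llama-3.1-8B-EZO-1.1-it",
    messages=[{"role": "user", "content": "自己紹介をしてください。"}],
    temperature=0.7,          # illustrative values, not real user configs
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={              # samplers outside the OpenAI schema go in extra_body
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)

top_k, repetition_penalty, and min_p are not part of the OpenAI request schema, so the sketch passes them through the SDK's extra_body field, which forwards extra JSON keys to the server as-is.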