werty1248/Llama-3-Ko-8B-OpenOrca
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 8K · Published: Jun 20, 2024 · License: llama3 · Architecture: Transformer · Status: Warm
werty1248/Llama-3-Ko-8B-OpenOrca is an 8 billion parameter Llama 3-based causal language model fine-tuned for Korean language tasks. It was trained with 8-bit LoRA on the kyujinpy/OpenOrca-KO dataset, starting from beomi/Llama-3-Open-Ko-8B. The model performs well on Korean language understanding benchmarks, making it suitable for applications that need robust Korean natural language processing.
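The model can be loaded directly with Hugging Face Transformers. The sketch below is a minimal example under assumed defaults (bf16 weights, a plain Korean prompt with no particular chat template); it is not an official usage snippet published for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "werty1248/Llama-3-Ko-8B-OpenOrca"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit the 8B model on one GPU
    device_map="auto",
)

# Assumption: a simple instruction-style Korean prompt; the exact prompt format
# used during OpenOrca-KO fine-tuning is not specified here.
prompt = "질문: 한국의 수도는 어디인가요?\n답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```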
Popular Sampler Settings
The three parameter combinations most commonly used by Featherless users for this model. Each config sets the following sampler parameters: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.
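A common way to apply such settings is through an OpenAI-compatible chat completions client pointed at the serving endpoint. The sketch below is illustrative only: the base URL, the pass-through of non-standard sampler fields via extra_body, and the specific values are assumptions; substitute one of the configurations above.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumption: OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="werty1248/Llama-3-Ko-8B-OpenOrca",
    messages=[{"role": "user", "content": "서울에 대해 간단히 소개해 주세요."}],
    # Example values only; use a config from the tabs above.
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={
        # Non-standard OpenAI fields, forwarded if the endpoint supports them.
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```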