bigdefence/Llama-3.1-8B-Ko-bigdefence
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Aug 17, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights · Warm

bigdefence/Llama-3.1-8B-Ko-bigdefence is an 8-billion-parameter Llama-3.1 model developed by Bigdefence and fine-tuned for Korean language tasks. It uses the Llama-3.1 architecture with a 32,768-token context length and was optimized on the KoCommercial-Dataset. The model is intended for applications that require strong Korean language generation and understanding.
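As a usage sketch, the model can be loaded locally with Hugging Face transformers. The chat markup below follows the standard Llama-3.1 instruct template; the exact generation call is an assumption that this fine-tune keeps the stock Llama-3.1 interface, not a procedure published by Bigdefence.

```python
MODEL_ID = "bigdefence/Llama-3.1-8B-Ko-bigdefence"


def format_llama31_prompt(user_message: str, system_message: str = "") -> str:
    """Build a raw Llama-3.1 chat prompt (mirrors the tokenizer's chat template)."""
    parts = ["<|begin_of_text|>"]
    if system_message:
        parts.append(
            f"<|start_header_id|>system<|end_header_id|>\n\n{system_message}<|eot_id|>"
        )
    parts.append(f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>")
    # Generation prompt: the model continues from the assistant header.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


def generate(user_message: str, max_new_tokens: int = 256) -> str:
    """Generate a Korean completion. Heavy deps are imported here so the
    prompt helper above stays usable without torch/transformers installed."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    prompt = format_llama31_prompt(user_message)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = out[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Example: `generate("한국의 수도는 어디인가요?")` ("What is the capital of Korea?") returns a Korean-language answer, subject to the sampler settings discussed below.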


Popular Sampler Settings

The three most popular parameter combinations among Featherless users for this model tune the following sampler settings:

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
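The parameters above can all be passed through an OpenAI-compatible completions request. The sketch below builds such a request body; the specific values are illustrative placeholders, not the actual user configurations from Featherless.

```python
import json

MODEL_ID = "bigdefence/Llama-3.1-8B-Ko-bigdefence"


def build_completion_payload(prompt: str) -> dict:
    """Assemble a completions request body with the sampler parameters
    listed on the model page. Values here are illustrative, not measured."""
    return {
        "model": MODEL_ID,
        "prompt": prompt,
        "max_tokens": 256,
        "temperature": 0.7,          # randomness of sampling
        "top_p": 0.9,                # nucleus sampling cutoff
        "top_k": 40,                 # sample only from the k most likely tokens
        "frequency_penalty": 0.0,    # penalize tokens by their frequency so far
        "presence_penalty": 0.0,     # penalize tokens that have appeared at all
        "repetition_penalty": 1.1,   # multiplicative penalty on repeated tokens
        "min_p": 0.05,               # drop tokens below this fraction of top prob
    }


body = json.dumps(build_completion_payload("서울에 대해 알려주세요."))
```

The serialized `body` can then be POSTed to any OpenAI-compatible endpoint serving this model.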