davidkim205/Ko-Llama-3-8B-Instruct
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 8k · License: llama3 · Architecture: Transformer

Ko-Llama-3-8B-Instruct is an 8-billion-parameter instruction-tuned causal language model developed by davidkim205 (Changyeon Kim), based on Meta-Llama-3-8B-Instruct. It was fine-tuned with a rejection-sampling technique to strengthen performance on Korean-language tasks, and it retains the base model's 8192-token context length.
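Since the model is a Llama-3 derivative, it can be loaded with Hugging Face transformers like any other Llama-3 checkpoint. The sketch below is a minimal example: the model ID comes from this page, while the dtype, device settings, Korean prompt, and sampling values are illustrative assumptions rather than published defaults.

```python
# Minimal sketch: load Ko-Llama-3-8B-Instruct via Hugging Face transformers
# and generate a Korean response. Assumes a GPU with enough memory for an
# 8B model in bfloat16; quantized loading is omitted for brevity.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidkim205/Ko-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Llama-3 chat formatting is applied by the tokenizer's chat template.
messages = [{"role": "user", "content": "대한민국의 수도는 어디인가요?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Llama-3 models commonly use <|eot_id|> as an end-of-turn terminator.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

output = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.7,  # illustrative value, not a documented default
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```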


Popular Sampler Settings

The most popular configurations used by Featherless users for this model tune the following sampler parameters:

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
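These parameters map naturally onto an OpenAI-compatible chat completions request. The sketch below assumes Featherless exposes such an endpoint at api.featherless.ai/v1; the endpoint URL, the API key placeholder, and every parameter value are assumptions (the page lists parameter names, not the actual top user values), and whether a given provider accepts extensions like top_k, repetition_penalty, and min_p should be checked against its API docs.

```python
# Minimal sketch: send the sampler settings above in a chat completions
# request. Endpoint URL and all values are assumptions, not documented
# defaults; replace YOUR_API_KEY with a real key.
import requests

payload = {
    "model": "davidkim205/Ko-Llama-3-8B-Instruct",
    "messages": [{"role": "user", "content": "한국의 전통 음식을 소개해 주세요."}],
    # Illustrative values only.
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

resp = requests.post(
    "https://api.featherless.ai/v1/chat/completions",  # assumed endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```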