12thD/ko-Llama-3-8B-sft-v0.1
TEXT GENERATION
Concurrency Cost: 1
Model Size: 8B
Quant: FP8
Ctx Length: 8k
License: meta-llama-3-community-license-agreement
Architecture: Transformer
Status: Warm
The 12thD/ko-Llama-3-8B-sft-v0.1 is an 8 billion parameter instruction-tuned language model based on the Llama 3 architecture. This model is shared by 12thD and is designed for general language understanding and generation tasks. Its primary strength lies in its ability to follow instructions effectively, making it suitable for a wide range of conversational and text-based applications. The model has a context length of 8192 tokens, allowing for processing of moderately long inputs.
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model. Each config tunes the following sampler settings:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
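These sampler settings compose in a fairly standard pipeline: a repetition penalty adjusts the raw logits, temperature rescales them, and top-k / top-p restrict the candidate pool before sampling. The sketch below illustrates that pipeline in plain Python for temperature, top_k, top_p, and repetition_penalty; the exact order of operations inside Featherless's inference stack is an assumption, `sample_next_token` is a hypothetical helper rather than part of any API, and frequency_penalty, presence_penalty, and min_p are omitted for brevity.

```python
import math
import random


def sample_next_token(logits, temperature=0.8, top_k=40, top_p=0.9,
                      repetition_penalty=1.1, prev_tokens=(), rng=None):
    """Apply common sampler settings to raw logits and draw one token id."""
    rng = rng or random.Random(0)
    logits = list(logits)

    # repetition_penalty: dampen logits of tokens already generated
    # (divide positive logits, multiply negative ones, as in common samplers)
    for t in set(prev_tokens):
        if logits[t] > 0:
            logits[t] /= repetition_penalty
        else:
            logits[t] *= repetition_penalty

    # temperature: <1 sharpens the distribution, >1 flattens it
    scaled = [l / temperature for l in logits]

    # softmax over the scaled logits
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # top_k: keep only the k most probable tokens
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]

    # top_p (nucleus): keep the smallest prefix whose cumulative mass >= top_p
    kept, mass = [], 0.0
    for i in ranked:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break

    # renormalise over the surviving candidates and sample one
    z = sum(probs[i] for i in kept)
    r, acc = rng.random() * z, 0.0
    for i in kept:
        acc += probs[i]
        if acc >= r:
            return i
    return kept[-1]
```

With top_k=1 the helper degenerates to greedy decoding (it always returns the argmax), which is a quick way to sanity-check that the penalty and ranking steps behave as expected.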