vitus9988/Llama-3.2-1B-Instruct-Ko-SFT
Text generation · Model size: 1B · Quant: BF16 · Context length: 32k · License: apache-2.0 · Architecture: Transformer · Open weights
vitus9988/Llama-3.2-1B-Instruct-Ko-SFT is a 1-billion-parameter instruction-tuned causal language model based on the Llama 3.2 architecture. It is fine-tuned specifically for Korean, making it suitable for applications that require instruction-following in Korean. Its 32,768-token context window supports processing and generating longer Korean texts.