torchtorchkimtorch/Llama-3.2-Korean-GGACHI-1B-Instruct-v1
Text Generation
Concurrency cost: 1 · Model size: 1B · Quantization: BF16 · Context length: 32k · Published: Sep 26, 2024 · Architecture: Transformer
Llama-3.2-Korean-GGACHI-1B-Instruct-v1 is a 1 billion parameter instruction-tuned language model developed by torchtorchkimtorch, based on Llama-3.2-1B-Instruct. Optimized for Korean language tasks, it was fine-tuned on over 230,000 high-quality Korean data samples. The model performs well on Korean-specific benchmarks such as KOBEST, showing improved accuracy over its base model on tasks including COPA, HellaSwag, and SentiNeg. It is primarily intended for applications that require strong Korean natural language understanding and generation.
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p. (No values were recorded in this snapshot.)
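The sampler parameters listed above map directly onto fields of an OpenAI-compatible chat-completions request, which is a common way to call hosted models. The sketch below builds such a request payload; all of the numeric values are placeholder assumptions (the page records no actual user settings), and the exact set of supported fields depends on the serving provider.

```python
# Hypothetical request payload showing where each sampler parameter goes.
# Values are illustrative defaults, not recorded user settings.
import json

payload = {
    "model": "torchtorchkimtorch/Llama-3.2-Korean-GGACHI-1B-Instruct-v1",
    "messages": [{"role": "user", "content": "한국 전통 음식을 소개해 주세요."}],
    "temperature": 0.7,         # randomness of sampling (higher = more random)
    "top_p": 0.9,               # nucleus sampling: keep smallest set of tokens
                                # whose cumulative probability reaches 0.9
    "top_k": 40,                # restrict sampling to the 40 most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they have appeared
    "presence_penalty": 0.0,    # penalize tokens that have appeared at all
    "repetition_penalty": 1.1,  # multiplicative penalty on repeated tokens
    "min_p": 0.05,              # drop tokens below 5% of the top token's probability
    "max_tokens": 256,          # cap on generated tokens
}
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Note that `repetition_penalty` and `min_p` are extensions beyond the original OpenAI schema; whether they are honored depends on the backend serving the model.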