ryusangwon/ko_en_Llama-3.2-1B-Instruct
Task: Text Generation · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Concurrency Cost: 1 · Published: Dec 28, 2024 · Architecture: Transformer

The ryusangwon/ko_en_Llama-3.2-1B-Instruct model is a 1 billion parameter instruction-tuned causal language model, fine-tuned by ryusangwon from the meta-llama/Llama-3.2-1B-Instruct base model using the TRL framework. It is intended for general text generation tasks and retains the instruction-following behavior of its base model.
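As a sketch of how such an instruction-tuned checkpoint would typically be used, the snippet below wraps the Hugging Face `transformers` text-generation pipeline in a small helper. The model id comes from this card; the generation parameters (`max_new_tokens`) and the chat-message format are assumptions based on standard Llama 3.2 instruct usage, and the download is deferred until the function is actually called.

```python
def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a reply from ryusangwon/ko_en_Llama-3.2-1B-Instruct.

    Note: calling this downloads the model weights (~2.5 GB in BF16),
    so the import and pipeline construction are kept inside the function.
    """
    from transformers import pipeline  # lazy import keeps the sketch lightweight

    pipe = pipeline(
        "text-generation",
        model="ryusangwon/ko_en_Llama-3.2-1B-Instruct",
        torch_dtype="bfloat16",  # matches the BF16 precision listed above
    )
    # Instruct models expect chat-style messages rather than raw text.
    messages = [{"role": "user", "content": prompt}]
    out = pipe(messages, max_new_tokens=max_new_tokens)
    # The pipeline returns the full conversation; the last entry is the reply.
    return out[0]["generated_text"][-1]["content"]
```

For a quick test, `generate("Translate 'hello' into Korean.")` would return the model's reply as a plain string; exact outputs vary with sampling settings.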
