beomi/KoAlpaca-llama-1-7b
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Mar 17, 2023 · License: apache-2.0 · Architecture: Transformer · Open Weights
KoAlpaca-llama-1-7b is a 7-billion-parameter Korean Alpaca model developed by Beomi, fine-tuned from the LLaMA architecture. It specializes in Korean language understanding and generation, building on the Stanford Alpaca instruction-following methodology. The model is designed for a range of natural language processing tasks in Korean, providing a solid foundation for applications that require localized linguistic capabilities. With a context length of 4096 tokens, it can process moderately long Korean texts.
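Since the model follows the Stanford Alpaca instruction-following methodology, prompts are typically formatted with the Alpaca template before being sent to the model. A minimal sketch of that template is shown below; note that the standard English Alpaca layout is used here for illustration, and the exact template used in KoAlpaca's Korean training data may differ.

```python
# Sketch of the standard Stanford Alpaca prompt template.
# Assumption: KoAlpaca accepts this layout; the actual Korean training
# template may use translated headers.

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format an instruction (and optional input) in the Alpaca layout."""
    if input_text:
        return PROMPT_WITH_INPUT.format(instruction=instruction, input=input_text)
    return PROMPT_NO_INPUT.format(instruction=instruction)

prompt = build_alpaca_prompt("다음 문장을 영어로 번역하세요.", "안녕하세요.")
print(prompt)
```

The resulting string can then be tokenized and passed to the model for generation, keeping the combined prompt and response within the 4096-token context window.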