davidkim205/komt-Llama-2-7b-chat-hf
Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · License: apache-2.0 · Architecture: Transformer · Open weights

komt-Llama-2-7b-chat-hf is a 7-billion-parameter auto-regressive language model developed by davidkim205 (Changyeon Kim), based on the Llama 2 architecture with a 4096-token context length. It is fine-tuned with a multi-task instruction technique to improve performance on Korean-language tasks. On Korean Semantic Textual Similarity it outperforms other Llama-2-7b variants.
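As a sketch of how an instruction-tuned chat model like this might be prompted: the helper below wraps a Korean instruction in an Alpaca-style template. The exact field names ("### instruction" / "### Response") are an assumption, since the card does not document komt's prompt format.

```python
def build_prompt(instruction: str) -> str:
    # Alpaca-style instruction template; the exact markers are an
    # assumption, not confirmed by the model card.
    return f"### instruction: {instruction.strip()}\n\n### Response:\n"

# Example: asking a Korean question ("What is the capital of Korea?")
prompt = build_prompt("한국의 수도는 어디인가요?")
```

The resulting string would then be tokenized and passed to the model (e.g. loaded from the `davidkim205/komt-Llama-2-7b-chat-hf` repo with Hugging Face transformers) for generation.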
