hyunseoki/ko-ref-llama2-13b
hyunseoki/ko-ref-llama2-13b is a 13-billion-parameter auto-regressive language model developed by Hyunseok Lee and Taeyoung Kim (KAIST ALIN Lab, omnious.ai). Built on the LLaMA2 transformer architecture with a 4096-token context window, the model is trained on an open Korean corpus and is designed to learn and generate Korean text, making it well suited to Korean language processing tasks.
Model Overview
hyunseoki/ko-ref-llama2-13b is built on the LLaMA2 transformer architecture, which provides a solid foundation for language understanding and generation. The model takes text as input and produces text as output.
Key Characteristics
- Architecture: Based on the LLaMA2 transformer architecture.
- Parameter Count: 13 billion parameters.
- Context Length: Supports a context window of 4096 tokens.
- Language Focus: Specifically trained on an open Korean dataset.
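The 4096-token context window listed above is a hard limit: prompt tokens plus newly generated tokens must fit inside it. A minimal sketch of that bookkeeping (plain Python, no model required; the token-id list stands in for real tokenizer output, and the helper name is illustrative):

```python
# Sketch of context-window budgeting for a 4096-token model.
# The token ids here are placeholders; a real tokenizer would produce them.
CONTEXT_LENGTH = 4096

def fit_prompt(prompt_ids: list[int], max_new_tokens: int,
               context_length: int = CONTEXT_LENGTH) -> list[int]:
    """Left-truncate the prompt so prompt + generation fits the window."""
    budget = context_length - max_new_tokens
    if budget <= 0:
        raise ValueError("max_new_tokens leaves no room for the prompt")
    # Keep the most recent tokens, which usually matter most for generation.
    return prompt_ids[-budget:]

# Example: a 5000-token prompt with 256 tokens reserved for generation.
trimmed = fit_prompt(list(range(5000)), max_new_tokens=256)
print(len(trimmed))  # 3840 tokens kept (4096 - 256)
```

Left-truncation is the common default for chat-style prompts, since the end of the prompt is closest to the text being generated.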
Primary Use Case
This model is primarily designed for tasks requiring proficiency in the Korean language. It was trained to model an open Korean corpus, making it well suited for applications such as:
- Korean text generation.
- Korean language understanding.
- Any application where strong Korean language capabilities are essential.
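The use cases above can be tried with the standard Hugging Face transformers API. This is a hedged sketch, not an official recipe: it assumes the checkpoint is published on the Hub under the model id shown, that transformers and torch are installed, and that enough GPU memory is available for 13B float16 weights (roughly 26 GB).

```python
def load_ko_llama(model_id: str = "hyunseoki/ko-ref-llama2-13b"):
    """Load tokenizer and model. Imports are local so the sketch can be
    read (and the file imported) without transformers installed."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision: ~26 GB for 13B params
        device_map="auto",          # place layers on available devices
    )
    return tokenizer, model

def generate_korean(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a Korean continuation for the given prompt."""
    tokenizer, model = load_ko_llama()
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Example prompt: "Please introduce yourself in Korean."
    print(generate_korean("한국어로 자기소개를 해 주세요."))
```

Since this is a base (reference) model rather than an instruction-tuned one, plain-text continuation prompts will generally work better than chat-style instructions.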