yentinglin/Taiwan-LLM-7B-v2.1-base
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

yentinglin/Taiwan-LLM-7B-v2.1-base is a 7-billion-parameter language model based on the Mistral-7B-v0.1 architecture, developed by Yen-Ting Lin and Yun-Nung Chen in collaboration with Ubitus K.K. It was continuously pre-trained on 20 billion tokens of Traditional Mandarin text, with CommonCrawl data specifically excluded from the pre-training corpus, and instruction fine-tuned on millions of conversations. The model is optimized for processing and generating Traditional Mandarin content, making it well suited to applications in Taiwan that require culturally aligned language understanding.
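For local experimentation, the model can be loaded with the Hugging Face transformers library. The sketch below is a minimal example, not the provider's serving setup: the dtype, prompt, and generation parameters are illustrative assumptions, and FP8-quantized serving (as listed above) requires separate tooling.

```python
# Minimal sketch: loading yentinglin/Taiwan-LLM-7B-v2.1-base with transformers.
# Assumes `transformers` and `torch` are installed and a GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yentinglin/Taiwan-LLM-7B-v2.1-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; FP8 inference needs dedicated runtimes
    device_map="auto",
)

# This is a base (non-chat) checkpoint, so use plain text completion
# rather than a chat template.
prompt = "台灣最高的山是"  # "The highest mountain in Taiwan is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```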
