yentinglin/Llama-3-Taiwan-8B-Instruct
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8K · Published: Jun 4, 2024 · License: llama3 · Architecture: Transformer

yentinglin/Llama-3-Taiwan-8B-Instruct is an 8 billion parameter instruction-tuned language model based on the Llama-3 architecture, developed by yentinglin. It is specifically fine-tuned on a large corpus of Traditional Mandarin and English data, offering strong capabilities in language understanding, generation, reasoning, and multi-turn dialogue for these languages. With an 8K context length, this model excels in tasks requiring deep comprehension and generation in Traditional Mandarin, including specialized domains like legal, manufacturing, medical, and electronics.
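As a Llama-3 instruct model, it expects prompts in the Llama-3 chat format. A minimal sketch of that template is below; in practice you would call `tokenizer.apply_chat_template` from Hugging Face `transformers` rather than hand-building the string, and the exact whitespace shown here is an assumption:

```python
# Hand-built sketch of the Llama-3 chat template (assumption: this mirrors
# what tokenizer.apply_chat_template produces for a system + user turn).
def build_llama3_prompt(system: str, user: str) -> str:
    """Format one system message and one user turn for a Llama-3 instruct model."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        # Trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a helpful assistant who answers in Traditional Chinese.",
    "請簡單介紹台灣的夜市文化。",  # "Briefly introduce Taiwan's night-market culture."
)
print(prompt)
```

Generation then stops when the model emits `<|eot_id|>`, which serves as the end-of-turn token in this format.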
