yentinglin/Llama-3-Taiwan-70B-Instruct
Text Generation · Concurrency Cost: 4 · Model Size: 70B · Quant: FP8 · Ctx Length: 8K · Published: May 31, 2024 · License: llama3 · Architecture: Transformer

yentinglin/Llama-3-Taiwan-70B-Instruct is a 70-billion-parameter instruction-tuned language model developed by yentinglin and built on the Llama-3 architecture. It is fine-tuned on a large corpus of Traditional Mandarin and English data and excels at language understanding, generation, reasoning, and multi-turn dialogue in these languages. The model supports an 8K context length and performs strongly on a range of Traditional Mandarin NLP benchmarks spanning the legal, manufacturing, medical, and electronics domains.
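As an illustration of the multi-turn dialogue format, here is a minimal sketch of how a Traditional Mandarin conversation could be rendered into a Llama-3 style chat prompt. The special tokens below are the standard Llama-3 ones; the `format_llama3_chat` helper is hypothetical, and in practice you would call the tokenizer's `apply_chat_template` rather than hand-building the string.

```python
# Sketch (assumption): hand-building a Llama-3 chat prompt to show the
# template structure. Real usage should rely on the model tokenizer's
# apply_chat_template method instead.

def format_llama3_chat(messages):
    """Render a list of {"role", "content"} messages as a Llama-3 prompt string."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Cue the model to generate the assistant's next turn.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_chat([
    {"role": "system", "content": "你是一個使用繁體中文回答的助理。"},
    {"role": "user", "content": "請簡介台灣的半導體產業。"},
])
```

The resulting string can be passed to any completion endpoint serving the model; each prior turn is closed with `<|eot_id|>`, and the trailing assistant header prompts the model to continue the dialogue.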
