Llama3-Chinese is an 8-billion-parameter language model based on Meta-Llama-3-8B, developed by Zhichen Zhang, Xin Lu, and Long Chen. It is fine-tuned with the DoRA and LoRA+ methods on 500k high-quality Chinese multi-turn SFT examples and 100k English multi-turn SFT examples. The model is optimized for Chinese language understanding and generation, making it suitable for applications that require strong bilingual capabilities.
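A minimal usage sketch with the Hugging Face `transformers` library is shown below. The repo id `zhichen/Llama3-Chinese` is assumed for illustration (substitute the actual hub path), and the snippet assumes the model ships with the standard Llama 3 chat template.

```python
# Minimal inference sketch for Llama3-Chinese.
# Assumptions: repo id "zhichen/Llama3-Chinese" is a placeholder, and the
# tokenizer provides a Llama-3-style chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zhichen/Llama3-Chinese"  # assumed repo id; replace with the real one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a single-turn Chinese prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "请用中文介绍一下你自己。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response and decode only the newly produced tokens.
outputs = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```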