UnicomLLM/Unichat-llama3-Chinese-8B

Hugging Face
Text Generation

  • Concurrency Cost: 1
  • Model Size: 8B
  • Quant: FP8
  • Ctx Length: 8k
  • Published: Apr 19, 2024
  • License: apache-2.0
  • Architecture: Transformer
  • Availability: Open Weights

UnicomLLM/Unichat-llama3-Chinese-8B is an 8-billion-parameter, Llama 3-based instruction-tuned model developed by the China Unicom AI Innovation Center. It is fine-tuned on carefully curated Chinese instruction data to deliver strong Chinese question-answering, and it retains the 8192-token context length of the base model. The model excels at general Chinese conversational tasks and problem-solving.


Unichat-llama3-Chinese-8B Overview

UnicomLLM's Unichat-llama3-Chinese-8B is the first Llama 3-based Chinese instruction-tuned model released by China Unicom AI Innovation Center. Built upon Meta's Llama 3-8B, this model undergoes full-parameter fine-tuning using carefully screened, high-quality Chinese instruction data across various domains. Its primary goal is to deliver superior Chinese question-answering performance while retaining the native 8192-token context length of the base Llama 3 model.
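Because the model is a full-parameter fine-tune of Llama 3-8B, it follows the standard Llama 3 chat prompt layout. The sketch below shows how a conversation is rendered into a single prompt string, assuming the stock Llama 3 special tokens; in practice you would let `tokenizer.apply_chat_template` from `transformers` do this for you rather than building the string by hand.

```python
# Sketch of the Llama 3 chat prompt layout. Assumption: this fine-tune keeps
# the stock Llama 3 special tokens; normally tokenizer.apply_chat_template
# handles this, so treat this as illustration, not a reference implementation.

def build_llama3_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts into a prompt."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open an assistant header so the model generates the reply next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful Chinese-language assistant."},
    {"role": "user", "content": "请介绍一下中国联通。"},
])
print(prompt)
```

The trailing assistant header is what cues the model to produce its answer; generation is typically stopped at the next `<|eot_id|>` token.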

Key Capabilities

  • High-Quality Chinese Q&A: Achieves strong performance in Chinese conversational tasks due to extensive fine-tuning on curated Chinese datasets.
  • Llama 3 Foundation: Leverages the robust architecture and pre-training of Meta's Llama 3-8B.
  • General Purpose: Demonstrates proficiency in diverse areas, including historical facts, mathematical problem-solving, and general knowledge.
  • Safety Features: Includes mechanisms to refuse harmful or illegal requests, such as declining to provide instructions for manufacturing explosives.

Good For

  • Applications requiring accurate and fluent Chinese language understanding and generation.
  • Chatbots and conversational AI systems targeting Chinese-speaking users.
  • Tasks involving general knowledge, logical reasoning, and instruction following in Chinese.

Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model.

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
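To make two of the listed parameters concrete, the sketch below applies top_p (nucleus) and min_p filtering to a toy next-token distribution. This is a simplified illustration of what these settings control, not the serving stack's actual sampler, and it omits temperature, top_k, and the penalty terms.

```python
# Toy illustration of top_p (nucleus) and min_p filtering over a next-token
# probability distribution. Simplified sketch only: real samplers also apply
# temperature, top_k, and the frequency/presence/repetition penalties.

def top_p_filter(probs, top_p):
    """Keep the smallest set of top tokens whose cumulative probability reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())
    return {token: p / total for token, p in kept.items()}  # renormalize

def min_p_filter(probs, min_p):
    """Drop tokens whose probability is below min_p times the top token's probability."""
    threshold = min_p * max(probs.values())
    kept = {token: p for token, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {token: p / total for token, p in kept.items()}

probs = {"的": 0.5, "是": 0.3, "在": 0.15, "了": 0.05}
nucleus = top_p_filter(probs, 0.9)   # keeps 的, 是, 在 (0.5 + 0.3 + 0.15 >= 0.9)
filtered = min_p_filter(probs, 0.2)  # drops 了 (0.05 < 0.2 * 0.5)
print(nucleus)
print(filtered)
```

Lower top_p or higher min_p cuts the tail of the distribution more aggressively, trading diversity for determinism; higher temperature flattens the distribution before these filters are applied.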