stvlynn/Gemma-2-2b-Chinese-it

Hugging Face
Text generation · Model size: 2.6B · Quant: BF16 · Context length: 8k · Published: Aug 2, 2024 · License: GPL-3.0 · Architecture: Transformer · Open weights

stvlynn/Gemma-2-2b-Chinese-it is a 2.6 billion parameter instruction-tuned language model developed by stvlynn, fine-tuned from Google's Gemma-2-2b-it. This model specializes in Chinese language processing, leveraging approximately 6.4k rows of the ruozhiba dataset for its fine-tuning. It is designed for applications requiring a compact yet capable model for Chinese conversational tasks.


Overview

This model is a specialized version of Google's Gemma-2-2b-it, fine-tuned by stvlynn to enhance its Chinese language understanding and generation.

Key Capabilities

  • Chinese Language Specialization: The model has been fine-tuned using approximately 6.4k rows from the ruozhiba dataset, focusing on Chinese conversational data.
  • Instruction Following: Inherits instruction-following capabilities from its base model, Gemma-2-2b-it.
  • Compact Size: With 2.6 billion parameters, it offers a balance between performance and computational efficiency for Chinese NLP tasks.

Good For

  • Applications requiring a smaller, efficient model for Chinese-centric instruction-following tasks.
  • Developers looking for a Gemma-2 variant optimized for specific Chinese language nuances.
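As a sketch of how the model might be used, the example below loads it with the Hugging Face `transformers` library and generates a reply to a Chinese prompt. The prompt-formatting helper assumes the model follows the standard Gemma-2 chat turn format inherited from its base model; the generation parameters are illustrative, not values documented by the author.

```python
# Sketch: single-turn Chinese generation with stvlynn/Gemma-2-2b-Chinese-it.
# Assumes the standard Gemma-2 chat format inherited from Gemma-2-2b-it.

def build_chat_prompt(user_message: str) -> str:
    """Format a single user turn in the Gemma chat style and open the model turn."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

if __name__ == "__main__":
    # Heavy imports are deferred so the prompt helper can be used standalone.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "stvlynn/Gemma-2-2b-Chinese-it"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # The card lists BF16 weights; load in that precision.
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

    prompt = build_chat_prompt("你好，请介绍一下你自己。")  # "Hello, please introduce yourself."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With `apply_chat_template`, the tokenizer can build the same turn structure from a messages list; the manual helper above simply makes the expected format explicit.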