uukuguy/speechless-orca-platypus-coig-lite-2k-0.6e-13b

Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4k · Published: Aug 30, 2023 · License: cc-by-nc-4.0 · Architecture: Transformer · Open Weights

The uukuguy/speechless-orca-platypus-coig-lite-2k-0.6e-13b is a 13-billion-parameter, Llama 2-based, instruction-tuned language model. It builds on OpenOrca-Platypus2-13B, fine-tuned on a mix of 10% of the COIG-PC-LITE dataset, 10% of OpenOrca, and 100% of Open-Platypus to add Chinese language capabilities. The model aims to combine the reasoning strengths of Platypus with the instruction-following of OpenOrca, specifically adapted for Chinese contexts.


Model Overview

The uukuguy/speechless-orca-platypus-coig-lite-2k-0.6e-13b is a 13-billion-parameter language model built on the Llama 2 transformer architecture. Its base, OpenOrca-Platypus2-13B, is itself a merge of garage-bAInd/Platypus2-13B and Open-Orca/OpenOrcaxOpenChat-Preview2-13B. What differentiates this model is its fine-tuning on 10% of COIG-PC-LITE and 10% of OpenOrca, alongside 100% of Open-Platypus, to introduce and enhance Chinese language capabilities.

Key Capabilities

  • Multilingual Adaptation: Specifically fine-tuned to extend the English-centric OpenOrca-Platypus2 base with Chinese language understanding and generation.
  • Instruction Following: Inherits strong instruction-following abilities from its OpenOrca base.
  • Reasoning: Benefits from the STEM and logic-based training of the Platypus component.

Good For

  • Applications requiring a 13B parameter model with strong instruction-following and reasoning in both English and Chinese.
  • Developers looking for a Llama 2-based model optimized for Chinese language tasks, building on established English-language performance.
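The instruction-following behavior described above can be exercised through the Hugging Face `transformers` library. The sketch below is a hypothetical usage example, not taken from the model card: it assumes the weights load under the repository name shown and that the model expects the Alpaca-style prompt template common to Platypus merges; verify both against the upstream card before relying on them.

```python
# Hypothetical usage sketch: query the model via Hugging Face transformers
# using an Alpaca-style prompt template (an assumption based on the
# Platypus family; check the upstream model card for the exact format).

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca-style instruction template."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

# Flip to True on a machine with enough VRAM for a 13B model.
RUN_MODEL = False

if RUN_MODEL:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "uukuguy/speechless-orca-platypus-coig-lite-2k-0.6e-13b"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Example Chinese instruction, exercising the COIG-tuned capabilities
    # ("Please briefly introduce the Great Wall in Chinese").
    prompt = build_prompt("请用中文简要介绍一下长城。")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Keeping prompt construction in a small helper like `build_prompt` makes it easy to swap templates if the model turns out to expect a different chat format.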