FittenTech/openllama-chinese-english-13b-600bt

Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Context Length: 4k · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

FittenTech/openllama-chinese-english-13b-600bt is a 13-billion-parameter OpenLLaMA-based language model developed by FittenTech, designed for strong performance in both Chinese and English. It features a 4096-token context window and is notable for its extensive training on 600 billion tokens of mixed Chinese and English data. The model targets bilingual applications that require robust understanding and generation in both languages.


FittenTech/openllama-chinese-english-13b-600bt Overview

This model is a 13-billion-parameter language model built on the OpenLLaMA architecture and developed by FittenTech. Its primary distinction is bilingual proficiency: it was trained on a dataset of 600 billion tokens comprising both Chinese and English content. This dual-language focus aims to provide strong performance in both linguistic contexts, making the model suitable for applications that switch between, or integrate, Chinese and English.

Key Capabilities

  • Bilingual Performance: Optimized for robust understanding and generation in both Chinese and English.
  • OpenLLaMA Base: Leverages the established OpenLLaMA architecture for its foundational capabilities.
  • 4096-Token Context Window: Allows the model to process longer inputs and generate more coherent responses.
  • Extensive Training: Benefits from training on 600 billion tokens, contributing to its general language understanding and generation abilities.
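Because the context window is capped at 4096 tokens, long bilingual inputs need to be truncated before generation. The helper below is a minimal sketch of one way to do this; it uses whitespace splitting as a crude stand-in for the model's real tokenizer (an assumption — in practice you would count tokens with the tokenizer shipped alongside the model), and the names `MAX_CONTEXT` and `reserve_for_output` are illustrative, not part of any official API.

```python
# Minimal sketch: fit a prompt into an assumed 4096-token context window,
# reserving room for the model's reply. Whitespace splitting stands in for
# the model's real tokenizer, which would give different (usually higher)
# token counts for Chinese text.

MAX_CONTEXT = 4096  # context length stated on this model card

def truncate_prompt(prompt: str, reserve_for_output: int = 512) -> str:
    """Keep the most recent tokens so the reply still fits in the window."""
    budget = MAX_CONTEXT - reserve_for_output
    tokens = prompt.split()
    if len(tokens) <= budget:
        return prompt
    # Drop the oldest tokens; recent context is usually most relevant.
    return " ".join(tokens[-budget:])

long_prompt = " ".join(["词"] * 5000)        # 5000 pseudo-tokens, over budget
short_prompt = "Translate to English: 你好，世界"

print(len(truncate_prompt(long_prompt).split()))  # → 3584 tokens kept
print(truncate_prompt(short_prompt))              # short input passes through unchanged
```

The 512-token reserve is an arbitrary illustrative choice; pick it based on how long you expect generations to be.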

Good For

  • Applications requiring high-quality text generation and comprehension in both Chinese and English.
  • Use cases involving cross-lingual communication or content creation.
  • Developers seeking an openly licensed bilingual model with a substantial parameter count and a 4096-token context window.