FreedomIntelligence/ShizhenGPT-7B-LLM

Text generation · Model size: 7.6B · Quantization: FP8 · Context length: 32K · Published: Aug 21, 2025 · License: apache-2.0 (open weights) · Architecture: Transformer

ShizhenGPT-7B-LLM by FreedomIntelligence is a 7.6 billion parameter language model derived from the ShizhenGPT-7B-Omni multimodal model. It specializes in Traditional Chinese Medicine (TCM) and is optimized for text-based interaction within this domain. Because its architecture is aligned with Qwen2.5, it deploys readily in environments that already support Qwen2.5 models, and it is recommended for use cases that require strong TCM expertise with text-only input and output.


ShizhenGPT-7B-LLM: A Specialized TCM Language Model

ShizhenGPT-7B-LLM, developed by FreedomIntelligence, is a 7.6 billion parameter language model specifically designed for Traditional Chinese Medicine (TCM). It is the text-only variant of the broader ShizhenGPT multimodal family, which includes models covering the four TCM diagnostic modalities: inspection (looking), listening and smelling, inquiry (questioning), and pulse-taking.

Key Capabilities

  • TCM Expertise: Possesses strong knowledge and understanding of Traditional Chinese Medicine concepts and practices.
  • Text-Based Interaction: Optimized for text-only applications, making it suitable for querying and generating information related to TCM.
  • Qwen2.5 Alignment: Its architecture is aligned with Qwen2.5, so it can be loaded and served with standard Qwen2.5-compatible tooling (see the loading sketch after this list).
  • Scalable Family: Part of a larger family of ShizhenGPT models, including multimodal (VL, Omni) variants and a larger 32B version, offering flexibility for different use cases.
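
Because the weights follow the Qwen2.5 architecture, a standard Hugging Face Transformers workflow should apply. The sketch below is illustrative rather than official usage from the model card: the chat-template call, generation settings, and example prompt are assumptions based on typical Qwen2.5-compatible checkpoints.

```python
# Minimal sketch: loading ShizhenGPT-7B-LLM via Hugging Face Transformers.
# The Auto* classes should work for a Qwen2.5-aligned checkpoint; the exact
# generation parameters and prompt here are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/ShizhenGPT-7B-LLM"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # follow the precision stored in the checkpoint config
    device_map="auto",    # place layers across available GPUs/CPU automatically
)

# Qwen2.5-style chat formatting with an example TCM question
# ("From a TCM perspective, what are common signs of qi deficiency?").
messages = [{"role": "user", "content": "从中医角度,气虚有哪些常见表现?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the model is text-only, no image or audio preprocessing is involved; the same script should serve for chatbot, retrieval, or educational prototypes built on the model.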

Good For

  • Applications requiring deep textual understanding and generation within the Traditional Chinese Medicine domain.
  • Developers seeking a specialized LLM for TCM-related chatbots, information retrieval, or educational tools.
  • Use cases where only text input and output are necessary, providing a streamlined alternative to the full multimodal ShizhenGPT-7B-Omni.

For more details, refer to the ShizhenGPT GitHub repository and the associated research paper.