QLU-NLP/BianCang-Qwen2.5-7B
Text generation
Model size: 7.6B | Quantization: FP8 | Context length: 32k | Published: Nov 16, 2024 | Architecture: Transformer | Concurrency cost: 1

BianCang-Qwen2.5-7B is a 7.6-billion-parameter instruction-tuned large language model developed by QLU-NLP on the Qwen2.5 architecture. It is designed and optimized for Traditional Chinese Medicine (TCM) applications, performing well on tasks such as TCM disease diagnosis and syndrome differentiation. The model also scores strongly on medical licensing examinations, making it suitable for assisting medical professionals and patients with TCM-related inquiries.
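As a sketch of how such an inquiry might be framed, the snippet below assembles an OpenAI-style chat message list for a TCM question. The system-prompt wording and the helper name `build_messages` are assumptions for illustration; the model card does not specify the exact chat template the model expects.

```python
# Hypothetical sketch: composing a TCM consultation prompt for
# BianCang-Qwen2.5-7B. The message schema follows the common
# chat format (list of {"role", "content"} dicts); the exact
# template this model expects is an assumption, not documented here.

def build_messages(question: str) -> list[dict]:
    """Assemble a chat-format message list for a TCM inquiry."""
    system = (
        "You are a Traditional Chinese Medicine assistant specializing "
        "in disease diagnosis and syndrome differentiation."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

messages = build_messages(
    "What syndrome pattern fits fatigue with a pale tongue?"
)
print(messages[0]["role"])  # system
print(len(messages))        # 2

# With Hugging Face transformers (not executed here; assumes the
# checkpoint is published under this repository name):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("QLU-NLP/BianCang-Qwen2.5-7B")
# model = AutoModelForCausalLM.from_pretrained("QLU-NLP/BianCang-Qwen2.5-7B")
# inputs = tok.apply_chat_template(messages, return_tensors="pt")
# outputs = model.generate(inputs, max_new_tokens=256)
```

In practice these messages would be sent to the model through a serving API or the tokenizer's chat template, as outlined in the commented lines.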
