Preferred-MedLLM-Qwen-72B is a 72.7-billion-parameter model developed by Preferred Networks, Inc. It was built by continued pretraining of Qwen/Qwen2.5-72B on an original corpus of medical-related text, specializing the model in medical knowledge. It supports a context length of 131,072 tokens and outperforms models such as GPT-4o and the base Qwen2.5-72B on the Japanese medical licensing examination benchmark (IgakuQA).
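Below is a minimal inference sketch using the standard Hugging Face `transformers` causal-LM interface that Qwen2.5-based models expose. The repo ID `pfnet/Preferred-MedLLM-Qwen-72B`, the example prompt, and the dtype/device settings are assumptions for illustration, not taken from this card.

```python
# Minimal inference sketch. Assumptions: the model is published under the
# repo ID "pfnet/Preferred-MedLLM-Qwen-72B" and follows the standard
# transformers causal-LM interface inherited from Qwen2.5-72B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pfnet/Preferred-MedLLM-Qwen-72B"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 72.7B params: bf16 weights alone need ~145 GB
    device_map="auto",           # shard across available GPUs (requires accelerate)
)

# Example medical prompt in Japanese ("Explain the diagnostic criteria
# for diabetes mellitus."); chosen here purely for illustration.
prompt = "糖尿病の診断基準について説明してください。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the echoed prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

Since the model was produced by continued pretraining rather than instruction tuning, this sketch uses plain text completion; whether a chat template is available depends on the released tokenizer configuration.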