Tianye88/Qwen2.5-1.5B-Instruct-Medical-cpt-sft-v2-dpo-v2
Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Jan 5, 2026 · License: MIT · Architecture: Transformer · Open Weights · Warm
Tianye88/Qwen2.5-1.5B-Instruct-Medical-cpt-sft-v2-dpo-v2 is a 1.5 billion parameter instruction-tuned language model published by Tianye88 and based on the Qwen2.5 architecture. As the name indicates, it is adapted for medical applications through continued pre-training (CPT), supervised fine-tuning (SFT), and Direct Preference Optimization (DPO) on medical data. With a 32k-token context length, it can process long medically relevant text, making it suitable for tasks requiring understanding and generation within the medical domain.
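Since this is a Qwen2.5-Instruct derivative with open weights, it should load through the standard Hugging Face `transformers` chat workflow. The sketch below is illustrative, not from the model card: the system prompt and the sample question are assumptions, and generation settings are defaults you would tune for your use case.

```python
MODEL_ID = "Tianye88/Qwen2.5-1.5B-Instruct-Medical-cpt-sft-v2-dpo-v2"


def build_messages(question: str) -> list[dict]:
    # Qwen2.5-Instruct models use the standard chat message format.
    # The system prompt here is an assumption, not part of the model card.
    return [
        {"role": "system", "content": "You are a helpful medical assistant."},
        {"role": "user", "content": question},
    ]


def main() -> None:
    # Heavy imports kept inside main() so build_messages() is usable standalone.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed in the model metadata.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    messages = build_messages("What are common symptoms of iron-deficiency anemia?")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

Note that medical model outputs should be reviewed by qualified professionals before any clinical use.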