Tianye88/Qwen2.5-1.5B-Instruct-Medical-cpt-sft-v2-dpo-v2

Text generation · Model size: 1.5B · Quantization: BF16 · Context length: 32K · Concurrency cost: 1 · Published: Jan 5, 2026 · License: MIT · Architecture: Transformer · Open weights

Tianye88/Qwen2.5-1.5B-Instruct-Medical-cpt-sft-v2-dpo-v2 is a 1.5-billion-parameter instruction-tuned language model developed by Tianye88 on the Qwen2.5 architecture. As the name indicates, it was adapted for medical applications through continued pre-training (CPT) on medical text, followed by supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) on medical datasets. With a 32,768-token context window, it can process and generate long, medically relevant text, making it suitable for tasks requiring deep understanding and generation within the medical domain.
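
Assuming the model ships with the standard Qwen2.5 chat template, a minimal inference sketch with the Hugging Face transformers library might look like the following; the system prompt and example question are illustrative, and the model's answers are not a substitute for professional medical advice:

```python
# Minimal inference sketch; assumes the standard Qwen2.5 chat template is bundled with the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tianye88/Qwen2.5-1.5B-Instruct-Medical-cpt-sft-v2-dpo-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the published BF16 weights
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a careful medical assistant."},
    {"role": "user", "content": "What are the first-line treatments for type 2 diabetes?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```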


Overview

Tianye88/Qwen2.5-1.5B-Instruct-Medical-cpt-sft-v2-dpo-v2 is built on the Qwen2.5 architecture and distinguished by its multi-stage adaptation pipeline: continued pre-training (CPT) on medical corpora, followed by supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) on medical preference data. This staged approach targets both stronger domain knowledge and better response alignment for medical tasks.
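
The model card does not publish the training configuration, but the final DPO stage would look broadly like the sketch below using the trl library. The SFT checkpoint name, preference-data file, and hyperparameters are illustrative assumptions, not the author's actual setup:

```python
# Hypothetical sketch of the final DPO stage; dataset and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Start from the preceding SFT checkpoint (name assumed for illustration).
model_id = "Tianye88/Qwen2.5-1.5B-Instruct-Medical-cpt-sft-v2"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# DPO expects preference pairs with "prompt", "chosen", and "rejected" columns.
# "medical_prefs.jsonl" is a placeholder; the actual preference data is not documented.
dataset = load_dataset("json", data_files="medical_prefs.jsonl", split="train")

config = DPOConfig(
    output_dir="qwen2.5-1.5b-medical-dpo",
    beta=0.1,                        # KL-penalty strength; a common default
    per_device_train_batch_size=2,
    learning_rate=5e-7,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,      # recent trl versions take the tokenizer here
)
trainer.train()
```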

Key Capabilities

  • Medical Domain Specialization: Optimized for understanding and generating text within the medical field.
  • Instruction Following: Designed to respond effectively to instructions, making it suitable for interactive applications.
  • Extended Context Window: Supports a 32,768-token context, allowing it to process lengthy medical documents or multi-turn conversations.

Good For

  • Medical Information Retrieval: Assisting in extracting relevant information from clinical notes, research papers, or patient records.
  • Medical Question Answering: Providing informed responses to medical queries based on its specialized training.
  • Clinical Text Generation: Generating summaries, reports, or other textual content relevant to healthcare scenarios (see the summarization sketch after this list).
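
For document-level tasks such as summarizing a clinical note, it is worth checking that the prompt fits the 32,768-token window before generating. A minimal sketch, assuming a hypothetical input file and leaving headroom for the generated summary:

```python
# Sketch: summarize a long clinical note while respecting the 32,768-token context window.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tianye88/Qwen2.5-1.5B-Instruct-Medical-cpt-sft-v2-dpo-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

MAX_CONTEXT = 32_768
clinical_note = open("discharge_summary.txt").read()  # hypothetical input file

messages = [
    {"role": "user", "content": f"Summarize the key findings in this note:\n\n{clinical_note}"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Leave headroom for the generated summary inside the context window.
if inputs.shape[-1] > MAX_CONTEXT - 512:
    raise ValueError(f"Prompt is {inputs.shape[-1]} tokens; too long for the context budget.")

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```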