YDXX/G-Health-14B-instruct

Text Generation · Model Size: 14B · Quant: FP8 · Ctx Length: 32k · Concurrency Cost: 1 · Architecture: Transformer · Published: Mar 8, 2026

YDXX/G-Health-14B-instruct is a 14-billion-parameter instruction-tuned language model built on Qwen3, designed specifically for medical and preventive health applications. It excels at interpreting health checkup reports, producing structured outputs, and offering personalized explanations. The model supports a 32,768-token context length and is aligned on extensive medical dialogue data for robust communication quality.


G-Health-14B-instruct: Medical & Preventive Health LLM

G-Health-14B-instruct is a 14 billion parameter large language model from YDXX, specialized for medical and preventive health use cases. Built upon the Qwen3 architecture, this model undergoes a two-stage alignment process to enhance its medical domain understanding and communication quality. It is particularly distinguished by its fine-tuning for interpreting health checkup reports, providing structured and actionable insights.

Key Capabilities

  • Medical Domain Alignment: Initial alignment from the Qwen3 base using 2.8 million supervised fine-tuning (SFT) dialogue samples and 1.6 million Direct Preference Optimization (DPO) preference samples.
  • Health Checkup Report Specialization: Further fine-tuned to interpret lab values and imaging conclusions, and to flag risks cautiously under uncertainty.
  • Personalized Explanations: Enhanced awareness for tailoring explanations and recommendations to individual contexts based on health reports.
  • Structured Outputs: Designed to produce structured, report-to-action outputs for health checkup interpretations.
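The card does not publish the exact schema of the model's report-to-action output, so any consuming application should validate responses before acting on them. Below is a minimal validation sketch under the assumption of a hypothetical JSON shape with `finding`, `risk_level`, and `recommended_action` fields; the field names are illustrative, not part of the model's documented interface.

```python
import json

# Hypothetical report-to-action fields; the model's actual output
# schema is not specified in this card.
REQUIRED_KEYS = {"finding", "risk_level", "recommended_action"}

def parse_interpretation(raw: str) -> list[dict]:
    """Parse a JSON array of interpretation items and check each
    item carries the expected keys before downstream use."""
    items = json.loads(raw)
    for item in items:
        missing = REQUIRED_KEYS - item.keys()
        if missing:
            raise ValueError(f"interpretation item missing keys: {missing}")
    return items

# Example of the assumed shape:
sample = json.dumps([
    {"finding": "LDL cholesterol 162 mg/dL (elevated)",
     "risk_level": "moderate",
     "recommended_action": "Discuss lipid management with a physician."}
])
print(parse_interpretation(sample)[0]["risk_level"])  # moderate
```

Validating against a fixed schema like this keeps "cautious risk signaling" enforceable in code: malformed or truncated model output fails fast instead of silently reaching a user.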

Good For

  • Interpreting comprehensive health checkup reports.
  • Generating personalized health recommendations and explanations.
  • Applications requiring robust medical dialogue and communication.
  • Use cases demanding cautious risk signaling in health assessments.
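For the use cases above, the model would typically be served behind an OpenAI-compatible chat endpoint. The sketch below only builds the request payload; the system prompt, sampling parameters, and serving setup are assumptions for illustration, not documented defaults of this model.

```python
import json

def build_checkup_request(report_text: str,
                          model: str = "YDXX/G-Health-14B-instruct") -> dict:
    """Assemble a hypothetical chat-completion payload for interpreting
    a health checkup report. Prompt wording and parameters are
    illustrative assumptions."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": ("You are a preventive-health assistant. Interpret the "
                         "checkup report, flag uncertain findings cautiously, "
                         "and answer with structured, actionable points.")},
            {"role": "user", "content": report_text},
        ],
        "max_tokens": 1024,   # well within the 32k context window
        "temperature": 0.2,   # low temperature for cautious, consistent output
    }

req = build_checkup_request("Fasting glucose: 118 mg/dL; HbA1c: 6.1%")
print(json.dumps(req, indent=2))
```

A low temperature is chosen here because risk communication favors consistent, conservative phrasing over creative variation; adjust to taste for conversational use.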