guangyangnlp/Qwen3-1.7B-SFT-medical-2e-5
Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Feb 22, 2026 · License: other · Architecture: Transformer · Status: Warm
guangyangnlp/Qwen3-1.7B-SFT-medical-2e-5 is a fine-tuned version of Qwen3-1.7B, optimized for medical applications. The 1.7-billion-parameter model was supervised fine-tuned (SFT) on the medical_o1_train dataset and reached a validation loss of 1.4089. The domain-specific fine-tuning is intended to improve performance on medical natural language processing tasks such as clinical question answering.
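Since the model follows the standard Qwen3 architecture, it should load with the Hugging Face Transformers library like any other causal language model. The sketch below is an assumption based on the usual Qwen3 usage pattern, not an official usage example from the model authors; the prompt text is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "guangyangnlp/Qwen3-1.7B-SFT-medical-2e-5"

# Load tokenizer and model; device_map="auto" places weights on GPU if available.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # BF16 weights per the model card
    device_map="auto",
)

# Format a medical question with the model's chat template (inherited from Qwen3).
messages = [
    {"role": "user", "content": "What are common first-line treatments for hypertension?"}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a response; sampling settings here are illustrative defaults.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that downloading the ~1.7B-parameter checkpoint requires several gigabytes of disk and, for reasonable latency, a GPU; outputs from a medical fine-tune should still be verified by a qualified professional.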