Dario213/Qwen3-4B-medical-reasoning
Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Mar 11, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
Dario213/Qwen3-4B-medical-reasoning is a 4-billion-parameter Qwen3 model developed by Dario213 and fine-tuned for medical reasoning tasks. It was trained with LoRA adapters using Unsloth and Hugging Face's TRL library for accelerated fine-tuning, on the FreedomIntelligence/medical-o1-reasoning-SFT dataset, and is optimized for complex medical reasoning. With a context length of 32,768 tokens, the model can process lengthy medical texts.
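A minimal sketch of loading the model with Hugging Face Transformers and asking it a clinical question. This assumes the repository ships the standard Qwen3 chat template; the helper names and the sample question are illustrative, not part of the model card.

```python
# Sketch: querying Dario213/Qwen3-4B-medical-reasoning via Transformers.
# Assumes the repo includes a Qwen3 tokenizer with a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Dario213/Qwen3-4B-medical-reasoning"


def build_messages(question: str) -> list[dict]:
    """Wrap a single user question in the chat format the template expects."""
    return [{"role": "user", "content": question}]


def generate_answer(question: str, max_new_tokens: int = 512) -> str:
    """Load the model, format the prompt, and decode only the new tokens."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    prompt = tokenizer.apply_chat_template(
        build_messages(question), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens so only the model's reply is returned.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


# Calling generate_answer(...) downloads the BF16 weights (~8 GB) and is
# best run on a GPU, so it is not invoked here:
# print(generate_answer("What is the first-line treatment for uncomplicated hypertension?"))
```

Because the model is an instruction-tuned chat model, prompts should go through `apply_chat_template` rather than being passed as raw text.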