Rumiii/Med-Qwen2.5-0.5B-it-Genesis
Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Mar 7, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Rumiii/Med-Qwen2.5-0.5B-it-Genesis is a 0.5 billion parameter instruction-tuned medical reasoning model, fine-tuned from Qwen2.5-0.5B-Instruct. It specializes in structured clinical differential diagnosis, medical question answering, and symptom-based reasoning. Trained on over 731,000 clinical reasoning samples across its lineage, this compact model is optimized for local deployment in resource-constrained or privacy-sensitive environments. Its primary strength lies in providing lightweight medical AI capabilities for research and educational tools.


Med-Qwen2.5-0.5B-it-Genesis: A Compact Medical Reasoning Model

Med-Qwen2.5-0.5B-it-Genesis, developed by Rumiii, is a 0.5 billion parameter instruction-tuned model built upon the Qwen2.5-0.5B-Instruct architecture. It is a continued fine-tune of Med-Qwen2.5-0.5B-MedReason-v2, with an additional 506,150 clinical reasoning samples generated by a 120B mixture-of-experts model. This brings its total exposure to clinical reasoning pairs across its training lineage to 731,329.

Key Capabilities & Features

  • Specialized Medical Reasoning: Designed for structured clinical differential diagnosis, medical question answering, and symptom-based reasoning.
  • Compact Size: At 0.5B parameters, it offers a lightweight solution for medical AI applications.
  • Extensive Medical Training: Fine-tuned on an additional 506,150 long-form clinical reasoning pairs covering diverse medical specialties, on top of its MedReason-v2 base.
  • Apache 2.0 License: Provides flexibility for commercial and research use.
  • Context Length: Supports a context window of 2048 tokens.
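As an instruction-tuned Qwen2.5 derivative, the model expects the ChatML chat template. In practice `tokenizer.apply_chat_template` assembles this automatically; the sketch below just makes the layout explicit (the system prompt and patient case are illustrative placeholders, not from the model card):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt as used by Qwen2.5 instruct models.

    Normally tokenizer.apply_chat_template produces this string;
    shown here only to illustrate the expected token layout.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a medical reasoning assistant.",               # illustrative system prompt
    "A 45-year-old presents with chest pain and dyspnea.",  # illustrative case
)
print(prompt)
```

The trailing `<|im_start|>assistant\n` leaves the turn open so generation continues as the assistant's reply.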

Ideal Use Cases

  • Clinical Decision Support Research: A valuable tool for exploring and developing AI-driven medical assistance.
  • Medical Education & Training: Suitable for creating interactive learning tools and simulations.
  • Offline & Local Deployment: Optimized for environments requiring on-device processing due to resource constraints or privacy concerns (e.g., HIPAA-friendly settings).
  • Lightweight AI Backbone: Serves as an efficient reasoning component for developers building medical AI applications.

Limitations

Due to its 0.5B parameter size, the model has a hard knowledge ceiling and may occasionally hallucinate. It is not intended for direct clinical decision-making, and its outputs must always be verified by qualified medical professionals.