Model Overview
omi-health/sum-small is a 3.8 billion parameter language model developed by Omi Health, fine-tuned from microsoft/Phi-3-mini-4k-instruct. Its primary purpose is to generate SOAP (Subjective, Objective, Assessment, Plan) summaries from medical dialogues.
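
Below is a minimal usage sketch with Hugging Face transformers. The system prompt and chat format are assumptions for illustration; the exact instruction used during fine-tuning may differ, so consult the model card before relying on this wording.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "omi-health/sum-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

dialogue = (
    "Doctor: What brings you in today?\n"
    "Patient: I've had a sore throat and a mild fever since Monday."
)

# Hypothetical instruction; the fine-tuning prompt may be worded differently.
messages = [
    {"role": "system", "content": "Summarize the dialogue into a SOAP note."},
    {"role": "user", "content": dialogue},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding keeps the summary deterministic for documentation tasks.
output = model.generate(inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```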
Key Capabilities & Performance
This model excels at transforming medical conversations into structured SOAP notes. It was trained on Omi Health's synthetic dataset of 10,000 medical dialogues and their corresponding SOAP summaries. On ROUGE-1, Sum Small scores 70, outperforming the following baselines (a small evaluation sketch follows the list):
- GPT-4 Turbo (69)
- LLaMA3 8B Instruct (59)
- GPT-3.5 (54)
- Its base model, Phi-3-mini-4k-instruct (55)
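
For readers who want to reproduce a comparable measurement, here is a sketch of a ROUGE-1 computation using the Hugging Face `evaluate` library (`pip install evaluate rouge_score`). The example texts are placeholders, not samples from Omi Health's dataset.

```python
import evaluate

rouge = evaluate.load("rouge")
predictions = ["S: Patient reports a sore throat and mild fever since Monday."]
references = ["S: Patient presents with a sore throat and low-grade fever since Monday."]

scores = rouge.compute(predictions=predictions, references=references)
# `rouge1` is an F-measure in [0, 1]; multiply by 100 to compare against
# the scores above (e.g., 0.70 -> 70).
print(f"ROUGE-1: {scores['rouge1'] * 100:.1f}")
```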
Intended Use & Limitations
Sum Small is intended for research and development in AI-powered medical documentation. While it demonstrates strong performance, its training data is entirely synthetic. The model is therefore not ready for direct clinical use and requires significant further validation, testing, and integration with safety guardrails before deployment in a medical setting. It is released under the MIT License, allowing broad commercial and non-commercial use.