omi-health/sum-small
Omi-Health Sum Small: Specialized Medical SOAP Summarization
Omi-Health's Sum Small is a 3 billion parameter language model, fine-tuned from microsoft/Phi-3-mini-4k-instruct and designed to generate SOAP (Subjective, Objective, Assessment, Plan) summaries from medical dialogues. Built by Omi Health, it is intended for research and development in AI-powered medical documentation.
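The snippet below is a minimal usage sketch with the transformers library. It assumes the model is published on the Hugging Face Hub as omi-health/sum-small and inherits the chat template of its Phi-3 base; the prompt wording, dialogue text, and generation settings are illustrative, not taken from the model card.

```python
# Minimal usage sketch, assuming omi-health/sum-small is on the Hugging Face
# Hub and uses the chat template inherited from its Phi-3 base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "omi-health/sum-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory; use float32 on CPU-only setups
    device_map="auto",
)

# Illustrative dialogue; real transcripts are typically much longer.
dialogue = (
    "Doctor: What brings you in today?\n"
    "Patient: I've had a persistent cough and a mild fever for three days."
)

messages = [
    {
        "role": "user",
        "content": "Summarize this medical dialogue into a SOAP note:\n\n" + dialogue,
    }
]

# Build the prompt with the model's chat template and generate the summary.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```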
Key Capabilities & Performance
- Specialized Summarization: Trained specifically to convert medical dialogues into structured SOAP summaries.
- Competitive Performance: Achieves a ROUGE-1 score of 70 on this task, ahead of GPT-4 Turbo (69), Llama-3 8B Instruct (59), and GPT-3.5 (54); a sketch of how such a score can be computed follows this list.
- Efficient Architecture: Builds on Phi-3-mini-4k-instruct, delivering strong performance at a small parameter count.
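For context, ROUGE-1 measures unigram overlap between a generated summary and a reference. The sketch below shows how such a score could be reproduced with the Hugging Face evaluate library; the prediction and reference strings are placeholders, not samples from the actual evaluation set.

```python
# Hedged sketch of a ROUGE-1 comparison using the `evaluate` library
# (requires the `rouge_score` package). The strings are placeholders.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["S: Patient reports a three-day cough and mild fever. ..."]
references = ["S: Patient presents with cough and low-grade fever for three days. ..."]

scores = rouge.compute(predictions=predictions, references=references)
print(f"ROUGE-1: {scores['rouge1'] * 100:.1f}")  # reported here on a 0-100 scale
```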
Training & Limitations
The model was trained on Omi Health's synthetic medical-dialogue-to-soap-summary dataset, which comprises 10,000 synthetically generated dialogues paired with SOAP summaries. Because the training data is entirely synthetic, the model requires substantial validation and adaptation before any direct clinical use in order to meet safety standards. It is released under the MIT License, permitting both commercial and non-commercial use.
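For those who want to inspect the training data, a minimal loading sketch follows; the Hub repo id omi-health/medical-dialogue-to-soap-summary is an assumption based on the dataset name above, as are the column contents.

```python
# Minimal sketch, assuming the dataset is published on the Hugging Face Hub
# under omi-health/medical-dialogue-to-soap-summary (repo id and column
# names are assumptions based on the description in this card).
from datasets import load_dataset

ds = load_dataset("omi-health/medical-dialogue-to-soap-summary")
print(ds)              # available splits and column names
print(ds["train"][0])  # one dialogue/SOAP-summary pair
```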