omi-health/sum-small
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Context Length: 4k · Published: May 5, 2024 · License: MIT · Architecture: Transformer · Open Weights

omi-health/sum-small is a 4 billion parameter language model fine-tuned from Microsoft/Phi-3-mini-4k-instruct, specifically designed for generating SOAP summaries from medical dialogues. This model demonstrates superior performance in SOAP summary generation compared to larger models like GPT-4, making it highly effective for AI-powered medical documentation research. It leverages a 4096-token context length to process medical conversations and produce structured summaries.


Model Overview

omi-health/sum-small is a 4 billion parameter language model developed by Omi Health, fine-tuned from the Microsoft/Phi-3-mini-4k-instruct architecture. Its primary purpose is to generate SOAP (Subjective, Objective, Assessment, Plan) summaries from medical dialogues.
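Since the model is fine-tuned from an instruct variant of Phi-3, it can presumably be driven through the standard Hugging Face `transformers` chat interface. The sketch below is illustrative, not from the model card: the system-prompt wording and generation parameters are assumptions, and the exact prompt format the fine-tune expects may differ.

```python
def build_messages(dialogue: str) -> list[dict]:
    """Wrap a medical dialogue in a chat-style prompt.

    The instruction wording here is a hypothetical placeholder, not the
    prompt the model was trained with.
    """
    return [
        {"role": "system",
         "content": "Summarize the following medical dialogue as a SOAP note."},
        {"role": "user", "content": dialogue},
    ]


def summarize(dialogue: str, model_id: str = "omi-health/sum-small") -> str:
    """Generate a SOAP summary with transformers (downloads model weights)."""
    # Heavy dependencies are imported lazily so build_messages stays lightweight.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
    input_ids = tokenizer.apply_chat_template(
        build_messages(dialogue),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    output_ids = model.generate(input_ids, max_new_tokens=512)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                            skip_special_tokens=True)
```

In practice the dialogue would be the full doctor-patient conversation, kept within the 4096-token context window.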

Key Capabilities & Performance

This model excels at transforming medical conversations into structured SOAP notes. It was trained on Omi Health's synthetic dataset of 10,000 medical dialogues and corresponding SOAP summaries. Evaluation using ROUGE-1 metrics shows that Sum Small achieves a score of 70, outperforming:

  • GPT-4 Turbo (69)
  • LLaMA3 8B Instruct (59)
  • GPT-3.5 (54)
  • Its base model, Phi-3-mini-4k-instruct (55)
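ROUGE-1 measures unigram overlap between a candidate summary and a reference summary. For intuition, here is a minimal self-contained sketch of the F1 variant; the actual evaluation presumably used a standard package such as `rouge-score` rather than this hand-rolled version, and reported scores scaled to 0-100.

```python
from collections import Counter


def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram-overlap F-score with whitespace tokenization."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Each shared unigram counts up to its minimum frequency in either text.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

A perfect match scores 1.0 (reported as 100); partial overlap degrades toward 0.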

Intended Use & Limitations

Sum Small is intended for research and development in AI-powered medical documentation. While it demonstrates strong performance, it is crucial to note that the training data is entirely synthetic. Therefore, it is not ready for direct clinical use and requires significant further validation, testing, and integration with safety guardrails before deployment in a medical setting. The model is released under the MIT License, allowing broad commercial and non-commercial use.