integration1857/prescription-simplifier-mistral7b
integration1857/prescription-simplifier-mistral7b is a fine-tuned Mistral-7B-Instruct-v0.3 model developed by Madhukar Kumar. This causal language model was fine-tuned with QLoRA to convert complex medical prescriptions into simple, patient-friendly explanations. It is designed to improve patient health literacy by simplifying drug instructions and is suitable for integration into healthcare applications.
Model Overview
This model, developed by Madhukar Kumar, is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3 designed to simplify medical prescriptions. It was fine-tuned using QLoRA (4-bit NF4 quantization with LoRA adapters) on Kaggle T4×2 GPUs.
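As a rough illustration, the QLoRA setup described above might look like the sketch below. The exact hyperparameters (LoRA rank, alpha, dropout, and target modules) are not published, so those values are assumptions:

```python
# Illustrative QLoRA configuration; r, lora_alpha, lora_dropout, and
# target_modules are assumptions, not the model's documented settings.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the base model, as described above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters on the attention projections (illustrative choices)
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # reports the trainable adapter parameter count
```

In this setup only the small LoRA adapter weights are trained while the 4-bit-quantized base model stays frozen, which is what makes fine-tuning feasible on modest GPUs such as Kaggle's T4s.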
Key Capabilities
- Prescription Simplification: Converts complex medical prescription text into easy-to-understand, plain-language explanations for patients.
- Patient Health Literacy: Aims to improve patient understanding of their medications.
- Integration Ready: Can be used via the HuggingFace Inference API or integrated into healthcare applications, pharmacy portals, or patient-facing tools (see the sketch after this list).
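A minimal sketch of calling the model through the Hugging Face Inference API, assuming the model is served there; the prompt format and sample prescription are illustrative assumptions, not documented specifics of this model:

```python
# Hypothetical Inference API usage; prompt wording is an assumption.
from huggingface_hub import InferenceClient

client = InferenceClient(model="integration1857/prescription-simplifier-mistral7b")

prescription = "Amoxicillin 500 mg PO TID x 7 days"
prompt = f"Simplify this prescription for a patient:\n{prescription}"

# text_generation works for causal language models served on the Inference API
response = client.text_generation(prompt, max_new_tokens=200)
print(response)
```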
Training Details
The model was trained on a small, hand-crafted dataset of 8 prescription-to-explanation pairs covering common drugs like Amoxicillin, Metformin, and Lisinopril. Despite the limited training data, it achieved a ROUGE-1 score of 0.51, a reasonable result given the dataset size. Training was fast, completing in 4.6 minutes on Kaggle T4×2 GPUs with 41.94 million trainable LoRA parameters.
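For reference, a ROUGE-1 score like the one reported can be computed with the `evaluate` library; the prediction/reference pair below is invented for illustration:

```python
# Sketch of a ROUGE-1 computation; the example texts are invented.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["Take one tablet by mouth three times a day for seven days."]
references = ["Take 1 tablet orally 3 times daily for 7 days."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rouge1"])  # unigram-overlap F-measure in [0, 1]
```

ROUGE-1 is a unigram-overlap F-measure, so a score of 0.51 loosely means the generated and reference explanations share roughly half of their words.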
Limitations and Recommendations
Due to the very small training set, the model has limited coverage and may produce inaccurate explanations for uncommon medications. It does not account for patient-specific factors like allergies or drug interactions and is English-only. It is not intended for clinical decision-making and should always be used with a medical disclaimer and professional medical guidance. Further fine-tuning on larger, more diverse datasets is recommended for improved accuracy and generalization.