johnsnowlabs/JSL-Med-Sft-Llama-3-8B
The JSL-Med-Sft-Llama-3-8B model, developed by John Snow Labs, is an 8 billion parameter language model based on the Llama 3 architecture. It is specifically fine-tuned for medical and clinical natural language processing tasks. This model demonstrates strong performance across various medical question-answering and knowledge-based evaluations, making it suitable for applications requiring specialized medical understanding.
Overview
Built on the Llama 3 architecture, JSL-Med-Sft-Llama-3-8B is fine-tuned by John Snow Labs specifically for the medical domain, targeting tasks that require accurate understanding and generation of clinical and biomedical text.
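Since the model is a Llama 3 derivative, prompts for it are typically formatted with the Llama 3 Instruct chat template. A minimal sketch, assuming this fine-tune follows that template (the special tokens below come from the standard Llama 3 Instruct format; in practice, `tokenizer.apply_chat_template` from `transformers` handles this automatically):

```python
def build_llama3_prompt(system: str, question: str) -> str:
    """Format a system message and a user question using the
    Llama 3 Instruct chat template (assumed; verify against the
    model's tokenizer config before relying on it)."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{question}<|eot_id|>"
        # Leave the assistant header open so generation continues from here.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a careful medical assistant.",
    "What is the first-line treatment for uncomplicated hypertension?",
)
print(prompt)
```

The resulting string can be passed to any Llama 3-compatible inference stack (for example, `transformers` generation with the model id `johnsnowlabs/JSL-Med-Sft-Llama-3-8B`).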
Key Capabilities
- Medical Question Answering: Excels in answering questions related to medical knowledge, as evidenced by its performance on datasets like MedMCQA, MedQA, and PubMedQA.
- Clinical Knowledge Processing: Demonstrates proficiency in understanding and processing clinical information, scoring well on MMLU subtasks such as clinical knowledge, college medicine, and professional medicine.
- Anatomy and Biology Understanding: Shows strong capabilities in specialized biological and anatomical contexts, including MMLU's anatomy and college biology sections.
Evaluation Highlights
The model has been rigorously evaluated on several medical benchmarks, achieving notable accuracy scores:
- MedMCQA: 0.5752 accuracy
- MedQA (4 options): 0.5970 accuracy
- MMLU - Clinical Knowledge: 0.7472 accuracy
- MMLU - Medical Genetics: 0.8300 accuracy
- PubMedQA: 0.7480 accuracy
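Benchmark scores like these are commonly produced with EleutherAI's lm-evaluation-harness. A sketch of reproducing a subset of the results, assuming current harness task names (`medmcqa`, `medqa_4options`, `pubmedqa`); the exact task names, prompt settings, and few-shot counts used for the reported numbers are not stated on this card and may differ:

```shell
pip install lm-eval

lm_eval --model hf \
  --model_args pretrained=johnsnowlabs/JSL-Med-Sft-Llama-3-8B,dtype=bfloat16 \
  --tasks medmcqa,medqa_4options,pubmedqa \
  --batch_size 8
```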
Good For
- Developing AI assistants for medical professionals.
- Building systems for medical information retrieval and summarization.
- Applications requiring specialized medical knowledge and reasoning.