malhajar/meditron-7b-chat
malhajar/meditron-7b-chat is a 7 billion parameter instruction-tuned causal language model, fine-tuned by Mohamad Alhajar from epfl-llm/meditron-7b. This model specializes in answering medical information queries, leveraging its base model's medical knowledge. It is optimized for English language tasks and achieves an average score of 49.59 on the Open LLM Leaderboard benchmarks.
malhajar/meditron-7b-chat: Medical Information LLM
This model is a 7 billion parameter instruction-tuned language model, developed by Mohamad Alhajar. It is a fine-tuned version of epfl-llm/meditron-7b, trained with supervised fine-tuning (SFT) on the Alpaca dataset to enhance its conversational capabilities.
Key Capabilities
- Medical Information Retrieval: Designed to answer explicit questions related to medicine, building upon the specialized knowledge of its base model.
- English Language Support: Primarily focused on processing and generating responses in English.
- Instruction Following: Fine-tuned to follow instructions effectively, making it suitable for chat-based interactions.
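Because the model was fine-tuned on the Alpaca dataset, an Alpaca-style single-turn prompt template is a reasonable starting point for chat-based interactions. This is a hedged sketch, not the confirmed training template; check the model card or tokenizer chat template before relying on it:

```python
# Assumption: the standard Alpaca single-turn template, since the model
# was SFT-trained on the Alpaca dataset. The exact template used during
# fine-tuning may differ from this sketch.

def build_alpaca_prompt(instruction: str) -> str:
    """Format a user instruction in the common Alpaca single-turn layout."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("What are common symptoms of iron-deficiency anemia?")
print(prompt)
```

The resulting string would be passed to the tokenizer and model (e.g. via a `transformers` text-generation pipeline); generation stops naturally after the `### Response:` section.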
Performance Benchmarks
Evaluated on the Open LLM Leaderboard, malhajar/meditron-7b-chat demonstrates competitive performance for its size:
- Average Score: 49.59
- AI2 Reasoning Challenge (25-shot): 50.77
- HellaSwag (10-shot): 75.37
- MMLU (5-shot): 40.49
- TruthfulQA (0-shot): 48.56
- Winogrande (5-shot): 73.16
- GSM8k (5-shot): 9.17
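The reported average is the arithmetic mean of the six benchmark scores above, which can be verified in a few lines:

```python
# Sanity-check: the Open LLM Leaderboard average is the mean of the
# six individual benchmark scores listed above.
scores = {
    "ARC (25-shot)": 50.77,
    "HellaSwag (10-shot)": 75.37,
    "MMLU (5-shot)": 40.49,
    "TruthfulQA (0-shot)": 48.56,
    "Winogrande (5-shot)": 73.16,
    "GSM8k (5-shot)": 9.17,
}
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # → 49.59
```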
Good For
- Developers requiring an LLM for medical question-answering applications.
- Use cases where a specialized, instruction-tuned model with a focus on medical knowledge is beneficial.
- Integration into systems that need to provide informative responses on health-related topics.