vishalkm/medalpaca-7b
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · License: CC · Architecture: Transformer

vishalkm/medalpaca-7b is a 7-billion-parameter large language model, fine-tuned from LLaMA, specifically designed for medical-domain tasks. It excels at medical question answering and dialogue, leveraging a diverse training dataset that includes Anki flashcards, Wikidoc, StackExchange, and ChatDoctor. The model is tuned to the knowledge level of medical students, making it suitable for specialized medical information retrieval.
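A minimal usage sketch with Hugging Face `transformers`, assuming the checkpoint is hosted on the Hub under the id `vishalkm/medalpaca-7b`. The Alpaca-style prompt template below is an illustrative assumption, not necessarily the exact format used during fine-tuning:

```python
# Hedged sketch: querying medalpaca-7b for medical Q&A via transformers.
# Assumptions (not from the model card): the Hub id "vishalkm/medalpaca-7b"
# resolves to this checkpoint, and an Alpaca-style instruction prompt is
# an appropriate input format.


def build_prompt(question: str) -> str:
    """Wrap a medical question in an Alpaca-style instruction prompt
    (hypothetical template for illustration)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{question}\n\n### Response:\n"
    )


def ask(question: str, model_id: str = "vishalkm/medalpaca-7b") -> str:
    """Generate an answer; downloads the ~7B weights on first call."""
    from transformers import pipeline  # heavy import kept local

    generator = pipeline("text-generation", model=model_id)
    out = generator(build_prompt(question), max_new_tokens=128)
    return out[0]["generated_text"]


if __name__ == "__main__":
    print(ask("What are the first-line treatments for hypertension?"))
```

Note that the full 4k-token context must accommodate both the prompt template and the generated response, so `max_new_tokens` should be set accordingly.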
