QIAIUNCC/EYE-Llama_qa
Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Apr 23, 2024 · License: MIT · Architecture: Transformer · Open weights

EYE-Llama_qa is a 7-billion-parameter large language model developed by QIAIUNCC, built on the Llama 2 architecture with a 4096-token context length. It is fine-tuned on ophthalmic datasets (EYE-lit and EYE-QA) for question answering in ophthalmology. Its primary use cases are supporting clinical decision-making, medical education, and research in ophthalmology, which distinguishes it from general-purpose LLMs.
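As a sketch of how the model could be queried, the snippet below loads the repository with Hugging Face `transformers` and generates an answer to an ophthalmology question. The prompt template and generation parameters are assumptions, not the documented fine-tuning format; check the model card before relying on them.

```python
# Minimal sketch of querying EYE-Llama_qa via Hugging Face transformers.
# MODEL_ID comes from the model card; the prompt format is an assumption.
MODEL_ID = "QIAIUNCC/EYE-Llama_qa"


def build_prompt(question: str) -> str:
    """Wrap an ophthalmology question in a simple QA prompt (assumed format)."""
    return f"Question: {question}\nAnswer:"


if __name__ == "__main__":
    # Imports kept inside the guard so the helper above is usable without
    # transformers installed; loading a 7B model needs a capable GPU.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    prompt = build_prompt("What are first-line treatments for open-angle glaucoma?")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # 4096-token context: keep prompt length plus max_new_tokens within that budget.
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) is used here for reproducibility; clinical QA settings may prefer it over sampling to reduce answer variance.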
