iamaber/mistral-7b-pubmedqa-lora-plus
- Task: Text generation
- Concurrency cost: 1
- Model size: 7B
- Quantization: FP8
- Context length: 4k
- Published: Apr 5, 2026
- License: apache-2.0
- Architecture: Transformer (open weights)

The iamaber/mistral-7b-pubmedqa-lora-plus model is a 7-billion-parameter variant of Mistral-7B-Instruct-v0.3, fine-tuned with LoRA+ on the PubMedQA dataset. It is optimized for medical question answering, achieving an accuracy of 0.45 on PubMedQA. The model is intended for applications that require specialized biomedical knowledge, particularly answering "yes", "no", or "maybe" questions grounded in medical literature.
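As a minimal sketch of how a PubMedQA-style query might be formatted for this model, the helper below builds a Mistral-style `[INST]` prompt from a question and its supporting abstract contexts. The instruction wording and template are assumptions based on the base model's chat format; the fine-tune's exact training template may differ.

```python
def build_pubmedqa_prompt(question: str, contexts: list[str]) -> str:
    """Format a PubMedQA question and its abstract contexts as a
    Mistral-style [INST] prompt asking for a yes/no/maybe answer.
    (Template is an assumption; check the fine-tune's actual format.)"""
    context_block = "\n".join(contexts)
    return (
        "[INST] Answer the following biomedical question with "
        "'yes', 'no', or 'maybe' based on the context.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question} [/INST]"
    )

# Hypothetical example question and context, for illustration only.
prompt = build_pubmedqa_prompt(
    "Does aspirin reduce the risk of recurrent stroke?",
    ["Aspirin therapy was associated with reduced stroke recurrence."],
)
print(prompt)
```

The resulting string can be passed to any inference endpoint serving the model; constraining decoding to the three answer tokens is a common further refinement for this kind of classification-style QA.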
