aparnavirtuonai/mistral-medqa

Task: Text generation

  • Model size: 7B
  • Quantization: FP8
  • Context length: 4k
  • Concurrency cost: 1
  • Published: Mar 18, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Weights: Open

aparnavirtuonai/mistral-medqa is a 7 billion parameter language model fine-tuned from mistralai/Mistral-7B-v0.1, specifically optimized for medical question-answering tasks. This model leverages Supervised Fine-Tuning (SFT) with LoRA on the MedQA dataset to enhance its ability to process and respond to medical queries. It is designed for applications requiring domain-specific reasoning in healthcare, education, and research, demonstrating stable general performance and baseline medical domain adaptation.


aparnavirtuonai/mistral-medqa: Medical QA Fine-Tuned Model

This is a 7-billion-parameter language model fine-tuned from the mistralai/Mistral-7B-v0.1 base model. Its primary purpose is medical question answering, leveraging domain-specific knowledge acquired during fine-tuning.

Key Capabilities

  • Medical Question Answering: Specifically trained to understand and respond to medical queries.
  • Healthcare Chatbot Systems: Suitable for integration into conversational AI for healthcare contexts.
  • Education and Research: Can be used for learning and investigative purposes within the medical field.
  • Fine-Tuned Performance: Achieves 0.70 accuracy on BoolQ (General QA) and 0.69 accuracy on PubMedQA (Medical QA), indicating stable general performance alongside domain adaptation.
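The question-answering capability above can be exercised through the standard transformers text-generation pipeline. The sketch below is a minimal example; the prompt template is an assumption, since this card does not document the format used during fine-tuning, so adjust it to match your data.

```python
# Hypothetical prompt builder for multiple-choice medical questions
# (MedQA-style). The exact template used during fine-tuning is not
# documented on this card, so treat this format as an assumption.
def format_medqa_prompt(question: str, options: dict[str, str]) -> str:
    lines = [f"Question: {question}", "Options:"]
    for letter, text in sorted(options.items()):
        lines.append(f"{letter}. {text}")
    lines.append("Answer:")
    return "\n".join(lines)

# Generation (requires `pip install transformers torch`):
#   from transformers import pipeline
#   pipe = pipeline("text-generation", model="aparnavirtuonai/mistral-medqa")
#   prompt = format_medqa_prompt(
#       "Which vitamin deficiency causes scurvy?",
#       {"A": "Vitamin A", "B": "Vitamin C", "C": "Vitamin D", "D": "Vitamin K"},
#   )
#   print(pipe(prompt, max_new_tokens=16)[0]["generated_text"])
```

Keep prompts well under the model's 4k context, and note that fine-tuning used a maximum sequence length of 512 tokens.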

Training Details

The model underwent Supervised Fine-Tuning (SFT) using LoRA (Low-Rank Adaptation) with PEFT. It was trained for 3 epochs on the MedQA dataset, utilizing a learning rate of 2e-5 and a maximum sequence length of 512 tokens.
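The hyperparameters reported above can be collected into a single configuration for reproduction attempts. Only the values stated on this card are filled in; the LoRA rank and alpha in the comment are placeholders, as the card does not report them.

```python
# Fine-tuning hyperparameters documented on this card.
SFT_CONFIG = {
    "base_model": "mistralai/Mistral-7B-v0.1",
    "dataset": "MedQA",
    "epochs": 3,
    "learning_rate": 2e-5,
    "max_seq_length": 512,  # tokens
}

# A comparable PEFT setup might look like this (requires `pip install peft`);
# r and lora_alpha are assumed values, not reported on the card:
#   from peft import LoraConfig
#   lora = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")
```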

Good For

  • Developing applications that require accurate responses to medical questions.
  • Building specialized chatbots for healthcare information.
  • Supporting medical education and research initiatives.

Note: This model is not intended for real-world medical diagnosis without professional supervision.