z8086486/GCCL-Medical-LLM-Qwen3-4B
Text generation · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Jan 15, 2026 · Architecture: Transformer

z8086486/GCCL-Medical-LLM-Qwen3-4B is a 4-billion-parameter language model developed by InSeo Song, fine-tuned from Qwen/Qwen3-4B-Instruct-2507. The model is optimized for medical question answering and achieves an average score of 60.62% across four key medical benchmarks: PubMedQA, MedMCQA, MedQA, and CareQA. With a context length of 40,960 tokens, it is designed for applications that require specialized medical knowledge and improved accuracy on healthcare-related queries.


z8086486/GCCL-Medical-LLM-Qwen3-4B: Medical Domain LLM

This model, developed by InSeo Song, is a 4-billion-parameter language model fine-tuned for medical applications. It is built on the Qwen/Qwen3-4B-Instruct-2507 base model and significantly improves on it in medical question-answering tasks.

Key Capabilities & Performance

The GCCL-Medical-LLM-Qwen3-4B demonstrates strong performance across several medical benchmarks, showcasing its specialization:

  • PubMedQA: 63.40%
  • MedMCQA: 51.35%
  • MedQA: 60.01%
  • CareQA: 67.74%

These scores average out to 60.62% across these four medical datasets. That is a substantial improvement over the base model, which averaged 20.01% on the same metrics, highlighting the effectiveness of the medical-domain fine-tuning.
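As a quick sanity check, the reported average follows directly from the four benchmark scores above (the dictionary and variable names here are illustrative):

```python
# Verify the reported 60.62% average over the four medical benchmarks.
scores = {
    "PubMedQA": 63.40,
    "MedMCQA": 51.35,
    "MedQA": 60.01,
    "CareQA": 67.74,
}

average = sum(scores.values()) / len(scores)
print(f"average: {average:.2f}%")  # close to the reported 60.62%

# Gain over the base model's reported 20.01% average on the same suite.
print(f"gain over base: {average - 20.01:.2f} points")
```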

Good For

  • Medical Question Answering: Answers medical queries accurately, drawing on its specialized fine-tuning.
  • Healthcare Applications: Suitable for integration into systems requiring accurate and contextually relevant medical information.
  • Research & Development: Can be used as a foundation for further research in medical AI, leveraging its enhanced domain knowledge.
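For any of the uses above, a minimal inference sketch with the Hugging Face transformers library might look like the following. The model id comes from this card; the helper names, system prompt, and generation settings are illustrative assumptions, not part of the release. The heavy model load is deferred into a function so the lightweight message-building helper works without torch installed:

```python
# Hypothetical inference sketch for GCCL-Medical-LLM-Qwen3-4B.
# The system prompt and helper names are illustrative assumptions.
MODEL_ID = "z8086486/GCCL-Medical-LLM-Qwen3-4B"

def build_messages(question: str) -> list:
    """Wrap a medical question in the chat format Qwen instruct models expect."""
    return [
        {"role": "system", "content": "You are a careful medical assistant."},
        {"role": "user", "content": question},
    ]

def answer(question: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate an answer. Imports are deferred so the
    helper above stays usable without torch/transformers installed."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

    prompt = tokenizer.apply_chat_template(
        build_messages(question), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Example call (downloads roughly 8 GB of BF16 weights on first use):
# print(answer("Is metformin a first-line therapy for type 2 diabetes?"))
```

As with any medical LLM, outputs should be reviewed by qualified clinicians before use in patient-facing settings.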