google/medgemma-4b-it
Task: text generation (multimodal, vision + text)
Concurrency cost: 1
Model size: 4.3B parameters
Quantization: BF16
Context length: 32K
Published: May 19, 2025
License: health-ai-developer-foundations
Architecture: Transformer
Access: gated

MedGemma-4b-it is a 4.3 billion parameter instruction-tuned variant of Google's Gemma 3 model, trained specifically for medical text and image comprehension. It uses a SigLIP image encoder pre-trained on diverse de-identified medical data, including chest X-rays and dermatology, ophthalmology, and histopathology images. The model is suited to medical applications such as text generation, visual question answering, and report generation, and outperforms the base Gemma 3 models on clinically relevant benchmarks. The underlying model supports context lengths of up to 128K tokens, though it is served here with a 32K context window.


Popular Sampler Settings

The three most popular sampler configurations among Featherless users for this model cover the following parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
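As a rough illustration of how the sampler parameters above would be applied in practice, the sketch below assembles a chat-completion request payload in the OpenAI-compatible style that hosted inference APIs commonly accept. The specific sampler values, the prompt, and the assumption that the serving API accepts these exact field names (including extensions like `repetition_penalty` and `min_p`) are illustrative; check the provider's API reference before relying on them.

```python
import json


def build_request(prompt: str, **samplers) -> dict:
    """Assemble a chat-completion payload for medgemma-4b-it.

    Sampler keyword arguments (temperature, top_p, top_k, etc.) are
    passed through only when explicitly set, so the server's defaults
    apply to everything else.
    """
    payload = {
        "model": "google/medgemma-4b-it",
        "messages": [{"role": "user", "content": prompt}],
    }
    payload.update({k: v for k, v in samplers.items() if v is not None})
    return payload


# Example: a conservative, low-temperature configuration of the kind a
# user might pick for clinical question answering (values are
# illustrative, not a recorded Featherless configuration).
req = build_request(
    "Summarize the key findings of this chest X-ray report: ...",
    temperature=0.3,
    top_p=0.9,
    top_k=40,
    repetition_penalty=1.1,
    min_p=0.05,
)
print(json.dumps(req, indent=2))
```

Omitting unset parameters, rather than sending explicit defaults for all seven, keeps the request minimal and lets the server's own defaults govern anything the user did not choose.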