unsloth/medgemma-27b-text-it

Text Generation · Concurrency Cost: 2 · Model Size: 27B · Quant: FP8 · Context Length: 32K · Published: May 20, 2025 · License: health-ai-developer-foundations · Architecture: Transformer

MedGemma 27B Text-IT is a 27-billion-parameter instruction-tuned causal language model developed by Google, built on the Gemma 3 architecture. This text-only variant is trained on medical text and optimized for inference-time compute, excelling at medical knowledge and reasoning tasks. It is designed to accelerate the development of healthcare AI applications by providing a strong baseline for medical text comprehension.


MedGemma 27B Text-IT: Specialized Medical LLM

MedGemma 27B Text-IT, developed by Google, is a 27-billion-parameter instruction-tuned language model based on the Gemma 3 architecture. This variant is text-only, distinguishing it from the multimodal 4B version, and is optimized for efficient inference. It is intended as a foundational model for building healthcare AI applications.
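As a rough sketch of how such an instruction-tuned checkpoint is typically queried, the snippet below uses the Hugging Face `transformers` chat pipeline. The model ID matches this page; the system prompt, generation settings, and hardware assumptions (a GPU able to hold the 27B weights) are illustrative, not prescribed by the model card.

```python
def build_messages(question: str) -> list[dict]:
    """Assemble a chat in the role/content format that
    chat-template-aware pipelines accept. The system prompt
    here is an illustrative assumption."""
    return [
        {"role": "system", "content": "You are a helpful medical assistant."},
        {"role": "user", "content": question},
    ]


if __name__ == "__main__":
    # Heavy part: requires a GPU with enough memory for the checkpoint.
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="unsloth/medgemma-27b-text-it",
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    out = pipe(
        build_messages("How do you differentiate bacterial from viral pneumonia?"),
        max_new_tokens=256,
    )
    print(out[0]["generated_text"][-1]["content"])
```

Guarding the pipeline call under `__main__` keeps the message-building helper importable and testable without loading the weights.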

Key Capabilities & Performance

  • Medical Text Comprehension: Demonstrates superior performance on medical knowledge and reasoning benchmarks compared to its base Gemma 3 model.
  • Benchmarked Excellence: Outperforms Gemma 3 27B across various medical benchmarks, including MedQA (89.8% vs 74.9%), MedMCQA (74.2% vs 62.6%), and MMLU Med (87.0% vs 83.3%).
  • Instruction-Tuned: Available only as an instruction-tuned model, ready for direct application in medical Q&A and text generation.
  • Long Context Support: Supports a native context length of 128K tokens (this deployment serves a 32K context), enabling processing of extensive medical documents.

Intended Use & Limitations

  • Good for: Developers in life sciences and healthcare seeking a strong baseline for medical text comprehension. It is ideal for fine-tuning with proprietary data for specific tasks like medical question answering or report generation.
  • Not for: Direct clinical diagnosis, patient management, or treatment recommendations without further validation and adaptation. Outputs require independent verification and clinical correlation. It has not been evaluated for multi-turn applications or use cases involving multiple images.
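Since the card recommends fine-tuning with proprietary data, a minimal sketch of preparing such data follows. It converts question/answer pairs into the chat-style `messages` records that common SFT trainers (e.g. TRL's `SFTTrainer` with a chat template) accept as JSONL; the record schema is an assumption about your training stack, not part of the model card.

```python
import json


def to_chat_record(question: str, answer: str) -> dict:
    """One supervised example: a user turn paired with the desired
    assistant reply, in the common `messages` schema."""
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }


def write_jsonl(pairs, path):
    """Write (question, answer) pairs as one JSON record per line."""
    with open(path, "w", encoding="utf-8") as f:
        for q, a in pairs:
            f.write(json.dumps(to_chat_record(q, a)) + "\n")
```

Fine-tuned outputs still require the independent verification and clinical validation described above before any clinical use.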