genevera/medgemma-27b-it-heretic
genevera/medgemma-27b-it-heretic is a 27 billion parameter instruction-tuned language model: a decensored variant of Google's MedGemma-27B-IT, built on the Gemma 3 architecture with a 32768 token context length. This text-only variant is trained for medical text comprehension, generation, and reasoning, and it demonstrates a significantly lower refusal rate than the original model.
Model Overview
This model, genevera/medgemma-27b-it-heretic, is a 27 billion parameter instruction-tuned variant of Google's MedGemma-27B-IT, built upon the Gemma 3 architecture. It has been decensored using the Heretic v1.2.0 tool, which reduced the refusal rate from 99/100 (original model) to 6/100 while keeping the KL divergence from the original model's outputs low (0.0536), indicating that its overall behavior stays close to the original. The model supports a substantial context length of 32768 tokens.
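The KL divergence figure above measures how far the decensored model's next-token probability distributions drift from the original's; lower means the abliteration changed behavior less. As a minimal sketch (with toy distributions invented for illustration, not taken from the actual models), per-position KL can be computed like this:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) between two discrete next-token probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy 4-token vocabulary: original model vs. decensored variant.
p = [0.70, 0.20, 0.05, 0.05]   # original model's next-token probabilities
q = [0.65, 0.22, 0.07, 0.06]   # decensored model's next-token probabilities

print(round(kl_divergence(p, q), 4))  # small value -> distributions stay close
```

A reported figure like 0.0536 is typically an average of such per-token divergences over an evaluation set; near-zero values mean the decensored model's predictions remain close to the original's outside of refusal behavior.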
Key Capabilities
- Medical Text Comprehension: Optimized for medical text, question-answering, and reasoning tasks, outperforming base Gemma models on benchmarks like MedQA, MedMCQA, and MMLU Med.
- Reduced Refusals: Demonstrates a substantially lower rate of content refusals, making it more permissive for a wider range of medical inquiries.
- Gemma 3 Foundation: Leverages the robust decoder-only transformer architecture of Gemma 3.
Good for
- Healthcare AI Application Development: Serves as a strong starting point for developers building applications that require medical text generation and reasoning.
- Research in Medical LLMs: Useful for researchers exploring the impact of decensoring on medical language models and their performance characteristics.
- Text-Only Medical Use Cases: Particularly well-suited for applications focused solely on medical text, as this variant is text-only, unlike its multimodal counterparts.
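For text-only medical use, the model can presumably be loaded like any Gemma 3 checkpoint through the Hugging Face transformers text-generation pipeline. The sketch below is illustrative, not from the model card: the model id is taken from this card, but the prompt, dtype choice, and generation settings are assumptions, and a 27B model requires a large GPU (or quantization) to load.

```python
def build_messages(question: str) -> list[dict]:
    """Single-turn chat messages in the format the pipeline's chat template expects."""
    return [
        {"role": "user", "content": question},
    ]

if __name__ == "__main__":
    # Heavy dependencies are kept behind the main guard; loading the
    # 27B checkpoint needs substantial GPU memory.
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="genevera/medgemma-27b-it-heretic",  # model id from this card
        torch_dtype=torch.bfloat16,                # assumed dtype; adjust for your hardware
        device_map="auto",
    )
    out = pipe(
        build_messages("What are common contraindications for metformin?"),
        max_new_tokens=256,
    )
    # The pipeline returns the full chat; the last message is the model's reply.
    print(out[0]["generated_text"][-1]["content"])
```

As with the original MedGemma, outputs should be treated as drafts for expert review, not as medical advice.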