2kfi/MedGemma-4B-it-finetuned
Vision · Concurrency Cost: 1 · Model Size: 4.3B · Quant: BF16 · Ctx Length: 32k · Published: Jan 22, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
MedGemma-4B-it-finetuned is a 4.3-billion-parameter instruction-tuned causal language model developed by 2kfi. It is a fine-tuned version of unsloth/medgemma-4b-it-unsloth-bnb-4bit, trained with Unsloth and Hugging Face's TRL library for faster iteration. It supports a 32,768-token context length, making it suitable for applications that process longer sequences. Its primary differentiator is this Unsloth-based training pipeline, which allows rapid fine-tuning and deployment.
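As a sketch of how an instruction-tuned checkpoint like this is typically used, the snippet below loads it through the Transformers `pipeline` API in BF16 (matching the quantization listed above). This is a hypothetical usage example: it assumes the checkpoint is published on the Hugging Face Hub under the id shown in the page header, and `ask` is an illustrative helper name, not part of any official API.

```python
# Model id taken from the page header; its availability on the Hub is an assumption.
MODEL_ID = "2kfi/MedGemma-4B-it-finetuned"


def build_messages(question: str) -> list[dict]:
    """Wrap a user question in the chat format instruction-tuned models expect."""
    return [{"role": "user", "content": question}]


def ask(question: str, max_new_tokens: int = 256) -> str:
    """Generate an answer with the fine-tuned model (sketch, untested against the live checkpoint)."""
    # Imported lazily so the helper above can be used without transformers installed.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype="bfloat16",  # BF16, as listed in the model metadata
        device_map="auto",       # place layers on available GPU(s) automatically
    )
    out = generator(build_messages(question), max_new_tokens=max_new_tokens)
    # Chat-style pipelines return the full conversation; the last message is the reply.
    return out[0]["generated_text"][-1]["content"]
```

Because the model accepts 32k tokens of context, long documents can usually be passed in a single prompt rather than chunked.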