AiHub4MSRH-Hash/hash-MedGemma-4B-16bit-eng-text-it

  • Modality: Vision
  • Concurrency Cost: 1
  • Model Size: 4.3B
  • Quant: BF16
  • Ctx Length: 32k
  • Published: Feb 9, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Tags: Open Weights, Cold

AiHub4MSRH-Hash/hash-MedGemma-4B-16bit-eng-text-it is a 4.3 billion parameter Gemma3 model developed by AiHub4MSRH-Hash and fine-tuned from unsloth/medgemma-4b-it. It was trained using Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training. The model is designed for English text processing and supports a 32,768-token context window for longer documents and multi-turn tasks.
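A minimal sketch of loading the model with Hugging Face transformers. The repo id is taken from this card; the dtype and device settings, the prompt, and the assumption that the repo ships a chat template are illustrative, not guaranteed by the card.

```python
# Hedged sketch: load the fine-tune and run one chat turn.
# Requires `pip install transformers torch` and enough memory for the BF16 weights.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

MODEL_ID = "AiHub4MSRH-Hash/hash-MedGemma-4B-16bit-eng-text-it"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Download the weights (once) and generate a single response."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the card's BF16 quantization
        device_map="auto",
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize this paragraph in one sentence: ..."))
```

Generation parameters (temperature, sampling) are left at library defaults here; tune them per task.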


Model Overview

This model is a fine-tuned version of the unsloth/medgemma-4b-it base model, optimized for English text processing.

Key Characteristics

  • Architecture: Based on the Gemma3 model family.
  • Parameter Count: 4.3 billion parameters, balancing capability against computational cost.
  • Context Length: Supports a context window of 32,768 tokens, useful for longer documents and complex queries.
  • Training Efficiency: Trained with Unsloth and Hugging Face's TRL library, which the authors report enabled roughly 2x faster training than standard methods.

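The parameter count and BF16 precision above give a rough lower bound on the memory needed just to hold the weights. A back-of-the-envelope estimate (figures from this card; activations, KV cache, and framework overhead excluded):

```python
# Rough weight-memory estimate for a 4.3B-parameter model stored in BF16.
params = 4.3e9        # parameter count from the card
bytes_per_param = 2   # BF16 uses 2 bytes per parameter

weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.1f} GB of weights")  # ~8.6 GB
```

Actual runtime memory will be higher once the KV cache for a long context is allocated, so plan for headroom beyond the raw weight size.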
Intended Use Cases

This model is well-suited for applications requiring robust English text understanding and generation, particularly where a 4.3B parameter model with a large context window offers a good efficiency trade-off. Because it is fine-tuned from a MedGemma base, which is oriented toward medical text, it may be a natural starting point for specialized health-related language tasks, though the card does not state the fine-tuning objective.