Bisher/gemma-3-4b-sadeedTashkeela-finetune-merged-f16

Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Aug 23, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

Bisher/gemma-3-4b-sadeedTashkeela-finetune-merged-f16 is a 1 billion parameter language model finetuned by Bisher from unsloth/gemma-3-1b-it. It was trained with Unsloth and Hugging Face's TRL library, achieving 2x faster training, and is designed for general language tasks, leveraging the Gemma 3 architecture for efficient performance.


Overview

Bisher/gemma-3-4b-sadeedTashkeela-finetune-merged-f16 is a 1 billion parameter language model developed by Bisher. It is a finetuned version of the unsloth/gemma-3-1b-it model, leveraging the Gemma 3 architecture.

Key Characteristics

  • Base Model: Finetuned from unsloth/gemma-3-1b-it.
  • Training Efficiency: Trained 2x faster by using the Unsloth library together with Hugging Face's TRL library (see the sketch after this list).
  • License: Distributed under the Apache-2.0 license.
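The training script itself is not published on this card; the following is a minimal sketch of the Unsloth + TRL finetuning workflow described above, assuming recent versions of both libraries. The dataset path, LoRA rank, and all hyperparameters are illustrative placeholders, not values from the actual run.

```python
# Minimal sketch of an Unsloth + TRL finetune of the base model named in this
# card. Dataset path, LoRA rank, and hyperparameters are placeholders only.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the base model with Unsloth's accelerated loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-1b-it",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (these would later be merged into the f16 weights).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder training data; the actual dataset is not specified on this card.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="outputs",
        per_device_train_batch_size=2,
        max_steps=100,
    ),
)
trainer.train()
```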

Use Cases

This model is suitable for general language processing tasks where the Gemma 3 architecture's efficiency and Unsloth's accelerated training are advantageous. At 1 billion parameters, it offers a reasonable balance between output quality and computational cost.
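For inference, the merged FP16 weights can be loaded through the standard transformers API. A minimal sketch, assuming the repo ID in this card's title is the Hugging Face model ID and that the model keeps the Gemma 3 chat template from its instruction-tuned base:

```python
# Minimal inference sketch; the model ID is taken from this card's title.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Bisher/gemma-3-4b-sadeedTashkeela-finetune-merged-f16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Instruction-tuned Gemma models expect the chat template.
messages = [{"role": "user", "content": "Write a short greeting."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```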