aarifO1/gemma-3-4b-it-128k-presls

Vision · Concurrency Cost: 1 · Model Size: 4.3B · Quant: BF16 · Ctx Length: 32k · Published: Apr 20, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
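The size and quantization figures give a rough lower bound on the memory needed just to hold the weights: at BF16, each of the 4.3 billion parameters occupies 2 bytes. A quick back-of-envelope sketch (weights only; real inference also needs KV cache and activation memory):

```python
# Rough weight-memory estimate for a 4.3B-parameter model stored in BF16.
# This is a lower bound: inference additionally needs KV cache and activations.

def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Return approximate weight memory in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

params = 4.3e9      # 4.3B parameters, from the model card
bf16_bytes = 2      # BF16 stores each parameter in 16 bits = 2 bytes

print(f"~{weight_memory_gb(params, bf16_bytes):.1f} GB for BF16 weights")
# A 4-bit quantization (like the bnb-4bit base this model was tuned from)
# needs roughly a quarter of that:
print(f"~{weight_memory_gb(params, 0.5):.1f} GB for 4-bit weights")
```

This is why the BF16 upload is substantially larger to serve than the 4-bit base it was finetuned from.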

aarifO1/gemma-3-4b-it-128k-presls is a 4.3-billion-parameter instruction-tuned Gemma 3 causal language model developed by aarifO1. It was finetuned with Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training, and is intended for general text generation and instruction-following tasks.


Model Overview

The model was developed by aarifO1 and is released under the Apache-2.0 license. It is a finetuned version of unsloth/gemma-3-4b-it-unsloth-bnb-4bit.

Key Characteristics

  • Efficient Finetuning: The model was finetuned using Unsloth and Hugging Face's TRL library, which enabled a roughly 2x faster training process.
  • Base Architecture: Built upon the Gemma 3 architecture, known for its capabilities in various language understanding and generation tasks.

Use Cases

This model is suited to general text generation and instruction-following tasks. Its Unsloth-based finetuning makes it a practical candidate for applications that need a capable mid-size language model produced with a lightweight training pipeline.
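As an instruction-tuned Gemma model, it expects prompts in Gemma's turn-based chat format, which in normal use is produced by the tokenizer's chat template. A minimal sketch of that formatting, assuming the standard Gemma `<start_of_turn>` / `<end_of_turn>` markers (use `tokenizer.apply_chat_template()` in practice, since the template shipped with the model is authoritative):

```python
# Minimal sketch of Gemma's chat-turn prompt format.
# Prefer tokenizer.apply_chat_template() in real code; this mirrors
# the usual shape of its output for illustration.

def build_gemma_prompt(messages: list[dict[str, str]]) -> str:
    """Render {'role', 'content'} messages as a Gemma-style prompt,
    ending with an open model turn for the model to complete."""
    parts = []
    for msg in messages:
        # Gemma uses 'model' rather than 'assistant' for its own turns.
        role = "model" if msg["role"] == "assistant" else msg["role"]
        parts.append(f"<start_of_turn>{role}\n{msg['content']}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = build_gemma_prompt(
    [{"role": "user", "content": "Summarize BF16 in one line."}]
)
print(prompt)
```

The trailing open `<start_of_turn>model` turn is what cues the model to generate its reply.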