kairawal/Gemma-3-4B-IT-ES-SynthDolly-1A-E1

Vision-capable | Concurrency cost: 1 | Model size: 4.3B | Quant: BF16 | Context length: 32k | Published: Apr 10, 2026 | License: apache-2.0 | Architecture: Transformer | Open weights

kairawal/Gemma-3-4B-IT-ES-SynthDolly-1A-E1 is a 4.3-billion-parameter instruction-tuned language model, fine-tuned by kairawal from unsloth/gemma-3-4b-it. Training used Unsloth together with Hugging Face's TRL library for acceleration, and the model supports a 32768-token context length. It is designed for general instruction-following tasks.


Model Overview

kairawal/Gemma-3-4B-IT-ES-SynthDolly-1A-E1 is a 4.3-billion-parameter instruction-tuned model developed by kairawal. It is fine-tuned from the unsloth/gemma-3-4b-it base model and released under the Apache-2.0 license.

Key Characteristics

  • Efficient Training: This model was trained roughly 2x faster by using the Unsloth library together with Hugging Face's TRL library.
  • Base Architecture: Built upon the Gemma-3-4B-IT architecture, providing a robust foundation for instruction-following.
  • Context Length: Supports a substantial context window of 32768 tokens, allowing for processing longer inputs and maintaining conversational coherence over extended interactions.
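Gemma-family instruction models expect a turn-based prompt layout. The sketch below builds such a prompt by hand, assuming the Gemma family's documented `<start_of_turn>`/`<end_of_turn>` markers; in practice, the tokenizer's `apply_chat_template` method should be preferred, and the exact markers here are an assumption, not something stated in this card:

```python
def build_gemma_prompt(messages):
    """Format a list of {'role', 'content'} dicts into a Gemma-style
    turn-based prompt string. The <start_of_turn>/<end_of_turn> markers
    are an assumption based on the Gemma family's chat format."""
    parts = []
    for msg in messages:
        # Gemma uses "model" rather than "assistant" as the reply role.
        role = "model" if msg["role"] == "assistant" else msg["role"]
        parts.append(f"<start_of_turn>{role}\n{msg['content']}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")  # cue the model to respond
    return "".join(parts)

prompt = build_gemma_prompt([
    {"role": "user", "content": "Summarize the Apache-2.0 license in one sentence."}
])
```

In real use, the returned string (or, better, the output of `apply_chat_template`) would be tokenized and passed to the model for generation.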

Use Cases

This model is suited to general instruction-following applications where a balance between model size and performance is desired, and its optimized training process keeps further fine-tuning economical.
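Long-running conversations can still exceed the 32768-token window, so older turns must be dropped before generation. A minimal last-N-turns truncation sketch, using whitespace splitting as a stand-in for the model's real tokenizer (all names here are illustrative):

```python
CTX_LEN = 32768  # context window stated in the model card

def trim_to_context(turns, max_tokens=CTX_LEN, count=lambda s: len(s.split())):
    """Keep the most recent whole turns whose combined (approximate)
    token count fits within max_tokens. `count` is a whitespace-based
    stand-in for the real tokenizer's length function."""
    kept, total = [], 0
    for turn in reversed(turns):  # walk from newest to oldest
        n = count(turn)
        if total + n > max_tokens:
            break  # this turn would overflow the window; stop here
        kept.append(turn)
        total += n
    return list(reversed(kept))  # restore chronological order

history = ["first turn " * 3, "second turn", "third turn"]
trimmed = trim_to_context(history, max_tokens=5)
# → ["second turn", "third turn"]
```

Trimming whole turns (rather than cutting mid-turn) keeps each retained message intact, at the cost of occasionally discarding slightly more context than strictly necessary.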