kairawal/Gemma-3-4B-IT-ES-SynthDolly-1A-E3

  • Capabilities: Text + Vision
  • Model Size: 4.3B
  • Quantization: BF16
  • Context Length: 32k
  • Published: Apr 12, 2026
  • License: apache-2.0
  • Architecture: Transformer

kairawal/Gemma-3-4B-IT-ES-SynthDolly-1A-E3 is a 4.3 billion parameter instruction-tuned language model, developed by kairawal and finetuned from unsloth/gemma-3-4b-it. It was trained with Unsloth and Hugging Face's TRL library, with an emphasis on efficient finetuning, and is designed for general instruction-following tasks built on the Gemma architecture.


Model Overview

This model builds on the Gemma architecture: kairawal finetuned unsloth/gemma-3-4b-it to produce a 4.3 billion parameter instruction-tuned checkpoint.

Key Characteristics

  • Architecture: Based on the Gemma model family.
  • Parameter Count: 4.3 billion parameters.
  • Training Efficiency: Trained with Unsloth and Hugging Face's TRL library, reflecting a focus on accelerated, memory-efficient finetuning.
  • Context Length: Supports a context window of 32768 tokens.
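Given these characteristics, the checkpoint can be loaded like any other Hugging Face causal LM. The sketch below is a minimal, hedged example using the Transformers `pipeline` API; the dtype and device settings are assumptions based on the BF16 quantization listed above, and `device_map="auto"` additionally requires the `accelerate` package.

```python
from transformers import pipeline

# Repository ID as listed on the model card.
MODEL_ID = "kairawal/Gemma-3-4B-IT-ES-SynthDolly-1A-E3"

def load_generator():
    """Build a text-generation pipeline for the model.

    Note: this downloads ~4.3B parameters of weights on first use.
    torch_dtype="bfloat16" matches the published BF16 quantization.
    """
    return pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype="bfloat16",
        device_map="auto",
    )

if __name__ == "__main__":
    generator = load_generator()
    out = generator("Explain what instruction tuning is.", max_new_tokens=128)
    print(out[0]["generated_text"])
```

The heavy model load is kept behind the `__main__` guard so the module can be imported without triggering a multi-gigabyte download.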

Intended Use Cases

This model is suitable for general instruction-following applications, combining the capabilities of its Gemma base with an efficient finetuning recipe. It fits scenarios that call for a moderately sized instruction-tuned model with a substantial (32k-token) context window, such as summarizing or answering questions over long documents on modest hardware.