kairawal/Gemma-3-4B-IT-EL-SynthDolly-1A-E3

Vision · Concurrency Cost: 1 · Model Size: 4.3B · Quant: BF16 · Ctx Length: 32k · Published: Apr 10, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

kairawal/Gemma-3-4B-IT-EL-SynthDolly-1A-E3 is a 4.3-billion-parameter instruction-tuned language model finetuned from unsloth/gemma-3-4b-it. Developed by kairawal, it was trained with Unsloth and Hugging Face's TRL library for accelerated training. It is designed for general language generation and understanding tasks and offers a 32,768-token context length.


Model Overview

This model builds on the unsloth/gemma-3-4b-it base with 4.3 billion parameters. Its 32,768-token context window makes it suitable for processing longer inputs and generating more extensive responses.

Key Characteristics

  • Base Model: Finetuned from unsloth/gemma-3-4b-it.
  • Training Efficiency: Trained with Unsloth and Hugging Face's TRL library, which the author reports enables roughly 2x faster training than standard finetuning.
  • Parameter Count: At 4.3 billion parameters, it balances output quality against memory and compute cost.
  • Context Window: Supports a 32,768-token context length, beneficial for tasks requiring extensive contextual understanding.
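Since this is a Gemma 3 instruction-tuned finetune, prompts are presumably rendered in the base Gemma turn format. The sketch below formats a message list into that template; this is an assumption based on the upstream Gemma 3 chat template (in practice, `tokenizer.apply_chat_template` should be preferred, which reads the template shipped with the model):

```python
# Minimal sketch of the Gemma-style chat turn format.
# Assumption: this finetune keeps the base Gemma 3 template
# (<start_of_turn>/<end_of_turn> markers); verify against the
# model's own tokenizer before relying on it.
def format_gemma_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts into a prompt string."""
    parts = []
    for m in messages:
        # Gemma uses the role name "model" for assistant turns.
        role = "model" if m["role"] == "assistant" else "user"
        parts.append(f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n")
    # Trailing open turn cues the model to generate its reply.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = format_gemma_prompt(
    [{"role": "user", "content": "Summarize TRL in one sentence."}]
)
print(prompt)
```

The same structure applies to multi-turn conversations: each prior assistant reply becomes a closed `model` turn before the final open one.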

Potential Use Cases

This model is well-suited for a variety of natural language processing tasks, including:

  • Instruction-following and conversational AI.
  • Text generation, summarization, and question answering.
  • Applications requiring a larger context window for improved coherence and relevance.

License

The model is released under the Apache 2.0 license, allowing for broad usage and distribution.