kairawal/Gemma-3-1B-IT-TL-SynthDolly-1A-E8

Text Generation · Model Size: 1B · Quant: BF16 · Context Length: 32k · Published: Apr 5, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

kairawal/Gemma-3-1B-IT-TL-SynthDolly-1A-E8 is a 1 billion parameter instruction-tuned causal language model developed by kairawal, fine-tuned from unsloth/gemma-3-1b-it. It was trained with Unsloth and Hugging Face's TRL library, which shortened fine-tuning time. With a context length of 32768 tokens, it is designed for general instruction-following tasks.
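The card does not include a usage snippet, so here is a minimal inference sketch, assuming the model loads through the standard transformers API and uses the Gemma chat template; the prompt and generation settings are illustrative.

```python
# Minimal inference sketch: load the model in BF16 via transformers and
# generate a response with the Gemma chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kairawal/Gemma-3-1B-IT-TL-SynthDolly-1A-E8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the published BF16 weights
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Summarize instruction tuning in two sentences."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```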


Model Overview

kairawal/Gemma-3-1B-IT-TL-SynthDolly-1A-E8 is a 1 billion parameter instruction-tuned language model developed by kairawal, fine-tuned from the unsloth/gemma-3-1b-it base model and built on the Gemma 3 architecture.

Key Characteristics

  • Parameter Count: 1 billion parameters, a compact yet capable size.
  • Context Length: Supports a 32768-token context window, allowing it to process longer inputs and sustain coherence over extended responses.
  • Training Efficiency: Trained with Unsloth and Hugging Face's TRL library, enabling roughly 2x faster fine-tuning; a hedged sketch of such a setup follows this list.
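The card names Unsloth and TRL but does not publish a training script. The sketch below shows what such a supervised fine-tuning setup typically looks like; the stand-in dataset (the "SynthDolly" name suggests a Dolly-style instruction set, so databricks/databricks-dolly-15k is used here), the LoRA rank, and all hyperparameters are placeholders, not the author's actual configuration.

```python
# Hypothetical fine-tuning sketch with Unsloth + TRL's SFTTrainer.
# Dataset, LoRA rank, and hyperparameters are placeholders, not the
# configuration actually used to produce this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-1b-it",  # the stated base model
    max_seq_length=4096,  # training-length placeholder; the model's context is 32768
    load_in_4bit=False,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank: placeholder
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Assumed stand-in dataset for a Dolly-style instruction set.
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

def to_text(example):
    # Render each record in Gemma's turn format (the 'context' field is ignored here).
    return {
        "text": f"<start_of_turn>user\n{example['instruction']}<end_of_turn>\n"
                f"<start_of_turn>model\n{example['response']}<end_of_turn>"
    }

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,  # `tokenizer=` in older TRL releases
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        num_train_epochs=1,  # placeholder
        output_dir="outputs",
    ),
)
trainer.train()
```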

Potential Use Cases

This model suits instruction-following applications where a small, efficient model with a long context window is beneficial, such as lightweight assistants or summarization over long documents. Because it is compact and was fine-tuned quickly with Unsloth, it lends itself to rapid deployment and iteration in development workflows; a quick-prototyping sketch follows.
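For fast iteration, the transformers text-generation pipeline wraps model loading and generation in a single call; the prompt below is illustrative.

```python
# Quick-prototyping sketch: the text-generation pipeline handles loading,
# chat templating, and generation in one call.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kairawal/Gemma-3-1B-IT-TL-SynthDolly-1A-E8",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

result = generator(
    [{"role": "user", "content": "List three tips for writing clear documentation."}],
    max_new_tokens=200,
)
# The pipeline returns the full conversation; the last turn is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```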