kairawal/Gemma-3-1B-IT-HI-SynthDolly-1A-E8

Hugging Face model listing:

  • Task: Text generation
  • Model size: 1B
  • Quantization: BF16
  • Context length: 32k
  • Concurrency cost: 1
  • Published: Apr 5, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

kairawal/Gemma-3-1B-IT-HI-SynthDolly-1A-E8 is a 1 billion parameter instruction-tuned causal language model developed by kairawal. It is fine-tuned from unsloth/gemma-3-1b-it using Unsloth together with Hugging Face's TRL library, a combination reported to make training roughly 2x faster. The model is designed for general instruction-following tasks.

Model Overview

kairawal/Gemma-3-1B-IT-HI-SynthDolly-1A-E8 is a 1 billion parameter instruction-tuned language model developed by kairawal. It is built on unsloth/gemma-3-1b-it, the Unsloth-packaged release of the Gemma 3 1B instruction-tuned model, placing it in the Gemma family of open models.
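
As a rough sketch, assuming the checkpoint is published on the Hugging Face Hub under this repository id and that a recent transformers release with Gemma 3 support is installed, the model can be loaded and inspected as follows. The repo id and BF16 precision come from the listing above; everything else is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id from this card; assumes the checkpoint is accessible on the Hub.
MODEL_ID = "kairawal/Gemma-3-1B-IT-HI-SynthDolly-1A-E8"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Sanity-check the headline numbers from the card.
print(f"parameters: {sum(p.numel() for p in model.parameters()) / 1e9:.2f}B")
print(f"max context length: {model.config.max_position_embeddings}")
```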

Key Characteristics

  • Efficient Training: The model was fine-tuned with Unsloth and Hugging Face's TRL library, a combination reported to make training roughly 2x faster (a sketch of such a setup follows this list).
  • Parameter Count: With 1 billion parameters it is relatively compact, making it suitable for applications where compute and memory are limited.
  • Context Length: The model supports a context length of 32768 tokens, allowing it to process and generate longer sequences of text.
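
The exact training recipe is not documented beyond the Unsloth/TRL mention, so the snippet below is only a minimal sketch of how a LoRA supervised fine-tune of unsloth/gemma-3-1b-it is typically set up with Unsloth and TRL's SFTTrainer. The dataset, LoRA rank, hyperparameters, and argument names (which vary slightly across TRL versions) are illustrative placeholders, not the values used to produce this model.

```python
from unsloth import FastLanguageModel  # Unsloth recommends importing it before transformers/trl
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

# Base model loaded through Unsloth, the source of the reported 2x training speed-up.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-1b-it",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are common defaults, not this model's values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder instruction dataset; the data actually used for this model is not published here.
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

def to_text(example):
    # Render each record with the Gemma chat template so SFTTrainer sees plain text.
    messages = [
        {"role": "user", "content": example["instruction"]},
        {"role": "assistant", "content": example["response"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```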

Use Cases

This model is suited to general instruction-following tasks, benefiting from its instruction tuning. Its efficient training recipe and small size make it a reasonable choice for applications that need a balance between output quality and resource usage.
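
As a minimal usage sketch, assuming the checkpoint is available under the repository id above, instruction-following inference can be run through the transformers text-generation pipeline with Gemma's chat-style messages; the prompt and decoding settings are illustrative.

```python
import torch
from transformers import pipeline

# Assumed repository id; adjust if the checkpoint is hosted elsewhere.
generator = pipeline(
    "text-generation",
    model="kairawal/Gemma-3-1B-IT-HI-SynthDolly-1A-E8",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Instruction-tuned Gemma checkpoints expect chat-style messages.
messages = [
    {"role": "user", "content": "Explain in two sentences why small instruction-tuned models are useful."},
]

result = generator(messages, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"][-1]["content"])
```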