mlkro/gemma-3-1b-it-GA-SynthDolly-2A

  • Task: Text generation
  • Model size: 1B
  • Quantization: BF16
  • Context length: 32k
  • Concurrency cost: 1
  • Published: Nov 30, 2025
  • License: apache-2.0
  • Architecture: Transformer (open weights)

The mlkro/gemma-3-1b-it-GA-SynthDolly-2A is a 1 billion parameter instruction-tuned Gemma model developed by mlkro and fine-tuned from unsloth/gemma-3-1b-it. It was trained with Unsloth and Hugging Face's TRL library for faster fine-tuning, and is designed for general instruction-following tasks, with a 32,768-token context length for processing longer inputs.


Model Overview

The mlkro/gemma-3-1b-it-GA-SynthDolly-2A is a 1 billion parameter instruction-tuned language model developed by mlkro. It is based on the Gemma architecture and was fine-tuned from the unsloth/gemma-3-1b-it model using Unsloth together with Hugging Face's TRL library, a combination the author reports made training 2x faster.
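The card does not publish the training script, but a typical Unsloth + TRL fine-tune of this base model looks like the sketch below. The dataset (the public Dolly 15k set, standing in for whatever "SynthDolly" data was actually used), the LoRA rank, and all hyperparameters are illustrative assumptions, not the values mlkro used.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastModel

# Load the base model named in the card through Unsloth's fast loader.
model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-3-1b-it",
    max_seq_length=2048,   # assumed training sequence length
    load_in_4bit=True,     # QLoRA-style loading; an assumption
)

# Attach LoRA adapters; rank and alpha are placeholders.
model = FastModel.get_peft_model(model, r=16, lora_alpha=16)

# Stand-in dataset, formatted with the tokenizer's Gemma chat template.
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

def to_text(example):
    messages = [
        {"role": "user", "content": example["instruction"]},
        {"role": "assistant", "content": example["response"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,        # placeholder step count
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```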

Key Characteristics

  • Architecture: Gemma-based, instruction-tuned.
  • Parameter Count: 1 billion parameters.
  • Training: Fine-tuned using Unsloth and Hugging Face's TRL library for optimized training speed.
  • Context Length: Supports a 32,768-token context window (see the config check after this list).
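The advertised context window can be read straight off the model configuration. A minimal sketch, assuming the repo ships a standard transformers config.json:

```python
from transformers import AutoConfig

# Fetch the published config and print the maximum context length.
config = AutoConfig.from_pretrained("mlkro/gemma-3-1b-it-GA-SynthDolly-2A")
print(config.max_position_embeddings)  # expected: 32768
```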

Intended Use

This model is suited to general instruction-following applications, benefiting from its instruction tuning and extended context window. Its small 1B footprint and efficient training recipe also make it inexpensive to iterate on and deploy, as the inference sketch below illustrates.
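For basic instruction-following use, a transformers text-generation pipeline is the simplest route. This is a minimal sketch assuming the checkpoint loads through standard transformers and inherits the Gemma chat template from its base model; the prompt is just an example.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; BF16 matches the quantization listed above.
generator = pipeline(
    "text-generation",
    model="mlkro/gemma-3-1b-it-GA-SynthDolly-2A",
    torch_dtype="bfloat16",
)

# Chat-style input; the pipeline applies the model's chat template.
messages = [
    {"role": "user", "content": "List three uses of a 32k-token context window."},
]
result = generator(messages, max_new_tokens=200)

# The pipeline returns the conversation with the assistant's reply appended.
print(result[0]["generated_text"][-1]["content"])
```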