k111191114/gemma-3-finetune
Text generation | Concurrency cost: 1 | Model size: 1B | Quant: BF16 | Context length: 32k | Published: Dec 2, 2025 | License: apache-2.0 | Architecture: Transformer | Open weights | Warm

k111191114/gemma-3-finetune is a 1-billion-parameter instruction-tuned causal language model, finetuned from unsloth/gemma-3-1b-it. Developed by k111191114, it was trained with Unsloth and Hugging Face's TRL library, yielding 2x faster training. It is intended for general-purpose text generation.
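The card does not document a loading recipe, but since the model is finetuned from a standard Gemma 3 checkpoint, a minimal sketch of running it with the Hugging Face transformers library might look as follows (the generation settings and chat-template usage are assumptions, not documented behavior of this specific finetune):

```python
# Hedged sketch: running k111191114/gemma-3-finetune via Hugging Face
# transformers. Assumes the finetune kept the base model's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "k111191114/gemma-3-finetune"


def build_chat(user_prompt: str) -> list[dict]:
    # Wrap a user prompt in the chat-message format used by
    # instruction-tuned Gemma models.
    return [{"role": "user", "content": user_prompt}]


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on the card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
    input_ids = tokenizer.apply_chat_template(
        build_chat(prompt), add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Explain what a causal language model is in one sentence."))
```

At a 32k context length and 1B parameters, the model should fit comfortably on a single consumer GPU in BF16.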
