Haeryz/gemma-3-finetune-Nizam
Vision · Concurrency Cost: 1 · Model Size: 4.3B · Quant: BF16 · Ctx Length: 32k · Published: Feb 2, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
Haeryz/gemma-3-finetune-Nizam is a 4.3 billion parameter instruction-tuned language model developed by Haeryz, finetuned from unsloth/gemma-3-4b-it. The model was trained with Unsloth and Hugging Face's TRL library, which the authors report made finetuning 2x faster. It is designed for general language generation tasks, leveraging the efficiency gains of its optimized training methodology.
Overview
Haeryz/gemma-3-finetune-Nizam is a 4.3 billion parameter instruction-tuned language model developed by Haeryz. It is based on unsloth/gemma-3-4b-it and was finetuned with significant efficiency improvements.
Key Capabilities
- Optimized Training: This model was finetuned 2x faster using the Unsloth library in conjunction with Hugging Face's TRL library, indicating a focus on efficient model development and deployment.
- Gemma-3 Base: Built upon the Gemma-3 architecture, it inherits the foundational capabilities of that model family, suitable for a range of natural language processing tasks.
Good For
- Efficient Deployment: Developers who want a Gemma-3 based model that has undergone an optimized finetuning process, which can shorten iteration cycles.
- General Language Tasks: Suitable for various instruction-following applications where a 4.3 billion parameter model is appropriate.
- Research into Finetuning: Useful for those interested in models trained with Unsloth for speed and efficiency.
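Since the model is instruction-tuned, prompts are typically wrapped in chat turn markers before generation. Below is a minimal sketch that assumes the standard Gemma-family turn format (`<start_of_turn>`/`<end_of_turn>`); the exact template for this finetune should be confirmed against the model's tokenizer chat template on the Hub.

```python
# Sketch of building a single-turn instruction prompt in the Gemma chat
# format. The turn markers are an assumption based on the Gemma model
# family; verify them with tokenizer.apply_chat_template before use.

def format_gemma_prompt(user_message: str) -> str:
    """Wrap one user turn in Gemma-style turn markers and open a model turn."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("Summarize the benefits of efficient finetuning.")
print(prompt)
```

In practice you would pass the formatted string (or, preferably, the tokenizer's own chat template output) to the model's generate call rather than hand-building prompts.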