magichampz/gemma-4b-hptuned
Vision · Concurrency Cost: 1 · Model Size: 4.3B · Quant: BF16 · Ctx Length: 32k · Published: Jun 2, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights
magichampz/gemma-4b-hptuned is a causal language model based on Gemma-3-4b-it, developed by magichampz and fine-tuned using Unsloth together with Hugging Face's TRL library. The fine-tuning process was optimized for speed, leveraging Unsloth's efficiency improvements. It is suitable for applications that want a Gemma-based instruction-tuned model produced through an accelerated training pipeline.
Model Overview
magichampz/gemma-4b-hptuned is a fine-tuned language model built from the unsloth/gemma-3-4b-it-unsloth-bnb-4bit base checkpoint. Developed by magichampz, this model was trained with the Unsloth library in conjunction with Hugging Face's TRL library.
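A minimal sketch of running inference with this checkpoint via Hugging Face `transformers` is shown below. The repo id comes from the card; the prompt format follows the standard Gemma `<start_of_turn>`/`<end_of_turn>` chat markers, and the generation settings (`max_new_tokens`, BF16, `device_map="auto"`) are illustrative assumptions, not values published by the author.

```python
def format_gemma_chat(user_message: str) -> str:
    # Gemma instruction-tuned models wrap turns in <start_of_turn>/<end_of_turn>
    # markers; the trailing "model" turn cues the model to respond.
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def generate(prompt: str, repo_id: str = "magichampz/gemma-4b-hptuned") -> str:
    # Illustrative only: requires `transformers` and `torch`, plus enough
    # memory for a ~4.3B BF16 checkpoint. Not verified against the repo.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tok(format_gemma_chat(prompt), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, dropping the prompt.
    return tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
```

In practice you could also call `tok.apply_chat_template(...)` instead of building the turn markers by hand, assuming the repo ships a chat template.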
Key Capabilities
- Faster Training: The fine-tuning run was optimized for speed, achieving roughly 2x faster training via Unsloth's kernel and memory optimizations.
- Gemma-3-4b-it Foundation: Built upon the Gemma-3-4b-it instruction-tuned model, inheriting its core language understanding and generation capabilities.
- Efficient Fine-tuning: The use of Unsloth with TRL indicates an emphasis on efficient resource utilization (e.g. 4-bit base weights) during the fine-tuning phase.
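The Unsloth + TRL workflow described above can be sketched roughly as follows. The base checkpoint name is taken from the card, but every hyperparameter here (LoRA rank, target modules, step count, sequence length) is a placeholder assumption; the author's actual training configuration is not published on this page.

```python
# Assumed LoRA hyperparameters, for illustration only (not the author's values).
LORA_CONFIG = {
    "r": 16,
    "lora_alpha": 16,
    "lora_dropout": 0.0,
    "target_modules": [
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
}


def build_trainer(train_dataset, max_seq_length: int = 2048):
    # Requires a CUDA GPU with `unsloth` and `trl` installed; imports are kept
    # inside the function so the module loads without them.
    from unsloth import FastLanguageModel
    from trl import SFTConfig, SFTTrainer

    # Load the 4-bit Unsloth base checkpoint named in the model card.
    model, tokenizer = FastLanguageModel.from_pretrained(
        "unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
        max_seq_length=max_seq_length,
        load_in_4bit=True,
    )
    # Attach LoRA adapters so only a small set of weights is trained.
    model = FastLanguageModel.get_peft_model(model, **LORA_CONFIG)
    return SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=train_dataset,
        args=SFTConfig(output_dir="outputs", max_steps=60),  # placeholder steps
    )
```

Unsloth's speedup comes from fused kernels and training on a quantized base with LoRA adapters, which is consistent with the 4-bit base checkpoint this model was fine-tuned from.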
Good For
- Developers seeking a Gemma-based model that has undergone an optimized and accelerated fine-tuning process.
- Applications where fine-tuning efficiency and speed are critical considerations when producing instruction-tuned models.
License
This model is released under the Apache-2.0 license.