shadowlilac/OpenGemini-Flash-Mini-1.7B
Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Jan 4, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm
OpenGemini-Flash-Mini-1.7B is a 1.7 billion parameter causal language model developed by shadowlilac, fine-tuned from unsloth/Qwen3-1.7B. It was trained with Unsloth and Hugging Face's TRL library, a combination the card credits with 2x faster fine-tuning. The model targets applications that need a compact yet capable language model.
OpenGemini-Flash-Mini-1.7B Overview
OpenGemini-Flash-Mini-1.7B is a compact 1.7 billion parameter language model developed by shadowlilac. It is a fine-tuned variant of the unsloth/Qwen3-1.7B base model, trained with the Unsloth library and Hugging Face's TRL for accelerated fine-tuning.
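The model should load like any Hugging Face causal LM. The snippet below is a minimal sketch: the repository id comes from this card, while the prompt and generation settings are illustrative placeholders, not recommendations from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id as shown on this card.
model_id = "shadowlilac/OpenGemini-Flash-Mini-1.7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",           # requires accelerate; places weights on GPU if available
)

prompt = "Briefly explain what a causal language model does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation settings here are illustrative defaults.
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```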
Key Characteristics
- Base Model: Fine-tuned from unsloth/Qwen3-1.7B.
- Training Efficiency: Trained with Unsloth and Hugging Face's TRL library, which the card credits with a 2x speedup during fine-tuning (see the sketch after this list).
- License: Distributed under the Apache-2.0 license, allowing for broad usage and modification.
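The card does not publish the actual training recipe, so the following is a generic Unsloth + TRL fine-tuning sketch of the kind the tooling above implies. The dataset, LoRA rank, target modules, and trainer arguments are placeholder assumptions, and TRL argument names vary across releases.

```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the base model named on this card; 4-bit loading is an optional memory saver.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-1.7B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are illustrative choices).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Toy placeholder dataset; the actual fine-tuning data is not disclosed.
dataset = Dataset.from_dict({"text": ["### Instruction: ...\n### Response: ..."] * 8})

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,  # `tokenizer=` in older TRL releases
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        max_steps=10,            # toy run; real training uses far more steps
        output_dir="outputs",
    ),
)
trainer.train()
```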
Good For
- Resource-constrained environments: At 1.7 billion parameters, it is small enough to deploy where compute and memory are limited (see the estimate after this list).
- Applications requiring fast fine-tuning: Because it trains cleanly with Unsloth, adapting it to new tasks or datasets should be comparatively quick.
- General language understanding and generation: As a fine-tuned Qwen3 model, it inherits capabilities for various NLP tasks.
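For a sense of scale, here is a back-of-envelope estimate of the weight footprint at the BF16 precision listed above; it ignores KV cache, activations, and framework overhead, so treat it as a lower bound.

```python
# Back-of-envelope memory estimate: 1.7B parameters at BF16 (2 bytes each).
params = 1.7e9
gb = params * 2 / 1e9
print(f"Weights alone: ~{gb:.1f} GB")  # ~3.4 GB, before KV cache and activations
```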