TeichAI/Qwen3-1.7B-Gemini-2.5-Flash-Lite-Preview-Distill
Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Nov 12, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

TeichAI/Qwen3-1.7B-Gemini-2.5-Flash-Lite-Preview-Distill is a 1.7 billion parameter language model developed by TeichAI, fine-tuned from unsloth/Qwen3-1.7B-unsloth-bnb-4bit. It was fine-tuned on 1,000 examples generated by Gemini 2.5 Flash Lite Preview 09-2025, using Unsloth and Hugging Face's TRL library for 2x faster training. The model supports a 40,960-token context length and is intended for lightweight applications that benefit from knowledge distilled from a larger, more capable model.
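The following is a minimal usage sketch with the standard Hugging Face transformers API, assuming the repo id listed above resolves on the Hub and a recent transformers release with Qwen3 support is installed; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load the distilled model in BF16 and run a chat-style generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TeichAI/Qwen3-1.7B-Gemini-2.5-Flash-Lite-Preview-Distill"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quant listed in the metadata
    device_map="auto",
)

# Build a chat prompt with the model's own chat template.
messages = [
    {"role": "user", "content": "Summarize the idea of model distillation in two sentences."}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```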
