TeichAI/Qwen3-8B-Gemini-3-Pro-Preview-Distill-1000x
Text generation · Model size: 8B · Quant: FP8 · Context length: 32k · Concurrency cost: 1 · Published: Dec 7, 2025 · License: apache-2.0 · Architecture: Transformer

TeichAI/Qwen3-8B-Gemini-3-Pro-Preview-Distill-1000x is an 8-billion-parameter Qwen3 model developed by TeichAI, fine-tuned from unsloth/Qwen3-8B-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, a combination Unsloth reports as roughly 2x faster to train. With a 32,768-token context length, it is suited to a wide range of language generation tasks.
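The card gives no usage instructions. A minimal sketch of loading the checkpoint with Hugging Face Transformers, the standard path for Qwen3 models, might look like the following; the `generate` helper and its `max_new_tokens` default are illustrative assumptions, not part of the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "TeichAI/Qwen3-8B-Gemini-3-Pro-Preview-Distill-1000x"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Hypothetical helper: run one chat turn through the model.

    Downloads the tokenizer and weights on first call; device_map="auto"
    places the model on GPU when one is available.
    """
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Qwen3 checkpoints ship a chat template; apply it to the user message.
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Nothing here is specific to this fine-tune; any Transformers-compatible Qwen3 loading recipe should work the same way with this model ID.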
