TeichAI/Qwen3-4B-Instruct-2507-Polaris-Alpha-Distill
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Nov 13, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights
TeichAI/Qwen3-4B-Instruct-2507-Polaris-Alpha-Distill is a 4-billion-parameter, Qwen3-based, instruction-tuned language model developed by TeichAI. It was fine-tuned with Unsloth and Hugging Face's TRL library on 1,000 examples distilled from Polaris Alpha, an early snapshot of GPT-5.1. The model is deliberately a non-reasoning model: it focuses on direct instruction following rather than extended chain-of-thought inference. It offers a 40,960-token context length, making it suitable for tasks with long inputs.
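Distillation fine-tuning of the kind described above typically means supervised fine-tuning on teacher outputs in chat format. The sketch below shows one plausible way to shape teacher prompt/response pairs into the `messages` layout that TRL's `SFTTrainer` accepts for chat models; the field names and sample pairs are illustrative assumptions, not the actual Polaris Alpha dataset schema.

```python
# Minimal sketch: converting teacher (Polaris Alpha) prompt/response pairs
# into the chat-messages format used for SFT with TRL.
# NOTE: "prompt"/"response" pair layout is a hypothetical example schema.

def to_chat_example(prompt: str, response: str) -> dict:
    """Wrap one teacher pair as a single-turn chat training example."""
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ]
    }

# Illustrative stand-ins for distilled teacher outputs.
raw_pairs = [
    ("What is 2 + 2?", "4."),
    ("Name the capital of France.", "Paris."),
]

dataset = [to_chat_example(p, r) for p, r in raw_pairs]
```

A list of such dicts (or a JSONL file of them) can be loaded into a Hugging Face `Dataset` and passed directly to `SFTTrainer`, which applies the model's chat template during tokenization.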