TeichAI/Qwen3-4B-Instruct-2507-Claude-Opus-3-Distill
Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Ctx length: 32k · Published: Dec 26, 2025 · Architecture: Transformer
TeichAI/Qwen3-4B-Instruct-2507-Claude-Opus-3-Distill is a 4-billion-parameter instruction-tuned language model based on the Qwen3 architecture. It was trained on a non-reasoning dataset distilled from Claude Opus 3, the `NoSlop4U/opus-3-1000x` dataset, with Unsloth used to speed up training. The model targets coding, agentic applications, and deep research, and its 40,960-token context length makes it suitable for processing extensive inputs in those use cases.
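As a sketch of how the model might be used, the snippet below loads the checkpoint with the Hugging Face `transformers` library and runs a single chat turn. It assumes the checkpoint is published on the Hub under the id shown and ships a chat template; the `transformers` import is deferred into `generate` so the helper that builds the chat messages can be used independently.

```python
MODEL_ID = "TeichAI/Qwen3-4B-Instruct-2507-Claude-Opus-3-Distill"

def build_chat(user_prompt: str) -> list[dict]:
    """Build a single-turn message list in the format expected by apply_chat_template."""
    return [{"role": "user", "content": user_prompt}]

def generate(prompt: str, max_new_tokens: int = 512) -> str:
    """Generate a completion for one user prompt (downloads the model on first call)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # deferred heavy import

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the published quantization of the weights.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    # Render the chat through the model's own template, ending at the assistant turn.
    text = tokenizer.apply_chat_template(
        build_chat(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )

if __name__ == "__main__":
    print(generate("Write a Python function that reverses a string."))
```

The same message list also works with OpenAI-compatible serving stacks, so `build_chat` can be reused if the model is deployed behind an inference endpoint instead of loaded locally.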