unsloth/Qwen2.5-14B-Instruct-1M
Text Generation · Concurrency Cost: 1 · Model Size: 14.8B · Quant: FP8 · Ctx Length: 32k · License: apache-2.0 · Architecture: Transformer · Open Weights

unsloth/Qwen2.5-14B-Instruct-1M is a 14.7 billion parameter instruction-tuned causal language model from the Qwen2.5 series, developed by the Qwen team. This model is specifically optimized for ultra-long context tasks, supporting context lengths of up to 1,010,000 tokens. It maintains strong performance on shorter tasks while significantly extending its ability to process and generate content over very long sequences.
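As a minimal sketch of how the model might be called through an OpenAI-compatible chat-completions API (the endpoint URL and serving setup are assumptions, not part of this card; only the model id is taken from above), the request payload could be assembled like this:

```python
import json

# Model id as listed on this card.
MODEL_ID = "unsloth/Qwen2.5-14B-Instruct-1M"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble a chat-completions payload for the model.

    The message format follows the widely used OpenAI chat schema;
    whether a given provider accepts it for this model is an assumption.
    """
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
    }

# Build (but do not send) a sample long-context request.
payload = build_chat_request("Summarize the attached report in three bullet points.")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to whatever chat-completions endpoint serves this model; long inputs simply go into the user message content, subject to the configured context limit.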
