elyza/ELYZA-Shortcut-1.0-Qwen-32B
Text Generation · Concurrency cost: 2 · Model size: 32.8B · Quant: FP8 · Context length: 32K · Published: Apr 30, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

ELYZA-Shortcut-1.0-Qwen-32B is a 32.8 billion parameter language model developed by ELYZA, based on Qwen/Qwen2.5-32B-Instruct with a 131,072-token context length. The model is post-trained to generate final answers directly, bypassing step-by-step reasoning, which makes it suited to rapid, direct problem solving. It achieves this through training on problem-solution pairs derived from optimal reasoning paths, making it a fit for applications that require immediate, concise outputs.
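As a minimal sketch, assuming the checkpoint follows the standard Hugging Face transformers chat interface inherited from its Qwen2.5 base (the prompt and generation settings below are illustrative, not official usage guidance from ELYZA):

```python
# Minimal usage sketch; assumes the standard transformers chat interface
# for Qwen2.5-based checkpoints applies to this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elyza/ELYZA-Shortcut-1.0-Qwen-32B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Because the model is post-trained to skip step-by-step reasoning,
# a plain problem statement should yield a direct final answer
# rather than a chain-of-thought trace.
messages = [{"role": "user", "content": "What is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```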
