elyza/ELYZA-Shortcut-1.0-Qwen-7B
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 30, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

ELYZA-Shortcut-1.0-Qwen-7B is a 7.6-billion-parameter language model developed by ELYZA, based on Qwen/Qwen2.5-7B-Instruct, with a 131,072-token context length. The model is post-trained on problem-solution pairs to bypass step-by-step reasoning and generate final answers directly, making it well suited to tasks where explicit reasoning traces are unnecessary and fast, direct answers are preferred.
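
Below is a minimal usage sketch with Hugging Face `transformers`, assuming the model is available on the Hub under the ID above and follows the standard Qwen2.5 chat template; the prompt and generation settings are illustrative, not official recommendations.

```python
# Minimal usage sketch (assumes transformers and accelerate are installed,
# and that the model uses the standard Qwen2.5 chat template).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elyza/ELYZA-Shortcut-1.0-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Because the model is post-trained to skip step-by-step reasoning,
# a plain question should produce a direct final answer.
messages = [{"role": "user", "content": "What is the capital of Japan?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)  # illustrative limit
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```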
