zycalice/Qwen2.5-32B-Instruct_auto_all_resp
Task: Text generation
Concurrency cost: 2
Model size: 32.8B parameters
Quantization: FP8
Context length: 32k
Published: Feb 20, 2026
License: apache-2.0
Architecture: Transformer (open weights)

zycalice/Qwen2.5-32B-Instruct_auto_all_resp is a 32-billion-parameter instruction-tuned model in the Qwen2.5 family, developed by zycalice. It was fine-tuned from unsloth/Qwen2.5-32B-Instruct using Unsloth together with Hugging Face's TRL library, which the author reports made training about 2x faster. The model is intended for instruction-following tasks.
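Since this is an instruction-tuned Qwen2.5 model, prompts follow the ChatML-style chat template used across the Qwen2.5 family. The sketch below hand-rolls that format for illustration only; in practice you would let `tokenizer.apply_chat_template` from the `transformers` library render it for you. The message contents are placeholder examples, not from the model card.

```python
def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts in the ChatML-style format
    that Qwen2.5 chat templates use: each turn is wrapped in
    <|im_start|>role ... <|im_end|>, and a trailing open assistant turn
    tells the model where to generate."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # generation prompt
    return "".join(parts)

# Placeholder conversation for illustration.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Qwen2.5 architecture."},
]
prompt = build_chatml_prompt(messages)
```

With the real tokenizer, the equivalent call would be `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`, which also handles any template updates shipped with the model repo.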