laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_num-train-epochs_8-0_Qwen3-32B
Text generation · Concurrency cost: 2 · Model size: 32B · Quantization: FP8 · Context length: 32k · Published: Jan 20, 2026 · License: other · Architecture: Transformer

laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_num-train-epochs_8-0_Qwen3-32B is a 32-billion-parameter language model fine-tuned from Qwen/Qwen3-32B. It is optimized for reasoning tasks, trained on a dataset derived from Stack Exchange and Stack Overflow sandbox environments. The fine-tuning targets logical, step-by-step responses, making the model suitable for complex problem-solving and analytical applications.
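As a hedged sketch (not part of the published card), the checkpoint can be loaded like any other Hugging Face causal language model; the generation settings below are illustrative assumptions, not published defaults, and the FP8 32B weights require a correspondingly large GPU.

```python
# Illustrative sketch: running this checkpoint with Hugging Face Transformers.
# The repo id is taken from this page; everything else is an assumption.

MODEL_ID = "laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_num-train-epochs_8-0_Qwen3-32B"

def generate(prompt: str, max_new_tokens: int = 512) -> str:
    # Imported lazily so the sketch can be read without the library installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # let Transformers pick the checkpoint dtype
        device_map="auto",    # shard across available GPUs
    )
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

Note the 32k context length from the listing above bounds the combined prompt plus generated tokens.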
