laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_num-train-epochs_7.0_Qwen3-32B
Text generation · Concurrency cost: 2 · Model size: 32B · Quant: FP8 · Context length: 32k · Published: Jan 11, 2026 · License: other · Architecture: Transformer

laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_num-train-epochs_7.0_Qwen3-32B is a 32-billion-parameter language model fine-tuned from Qwen3-32B on the penfever/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning dataset. As the dataset name suggests, it is aimed at reasoning tasks, likely drawn from StackExchange and Stack Overflow sandbox content, and it supports a 32,768-token context length.