laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_adam-beta1_0-95_Qwen3-32B
Text generation · Concurrency cost: 2 · Model size: 32B · Quantization: FP8 · Context length: 32K · Published: Jan 29, 2026 · License: other · Architecture: Transformer

laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_adam-beta1_0-95_Qwen3-32B is a 32-billion-parameter language model fine-tuned from Qwen/Qwen3-32B. It was trained on the penfever/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning dataset, which suggests it is optimized for reasoning over Stack Exchange and Stack Overflow sandbox-style Q&A. With a 32K-token context window, the model is suited to processing long technical questions and generating detailed, technical answers in those environments.
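As a minimal serving sketch (not prescribed by this listing), the model could be hosted with vLLM and queried through its OpenAI-compatible endpoint; the flag values below simply mirror the listed metadata (FP8 quantization, 32K context), and the port and prompt are illustrative assumptions:

```shell
# Hypothetical serving setup using vLLM; flags mirror the listed metadata.
vllm serve laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_adam-beta1_0-95_Qwen3-32B \
  --quantization fp8 \
  --max-model-len 32768

# Once the server is up (default port 8000), send a chat request:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_adam-beta1_0-95_Qwen3-32B",
        "messages": [{"role": "user", "content": "How do I deduplicate a list in Python while preserving order?"}]
      }'
```

Any OpenAI-compatible client works the same way; only the model identifier above is taken from the listing itself.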
