laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_learning-rate_1e-06_Qwen3-32B
Text generation · Concurrency cost: 2 · Model size: 32B · Quantization: FP8 · Context length: 32k · Published: Jan 14, 2026 · License: other · Architecture: Transformer · Status: Cold

The laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_learning-rate_1e-06_Qwen3-32B model is a 32-billion-parameter language model fine-tuned from Qwen/Qwen3-32B. It was trained on the penfever/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning dataset, which suggests it is optimized for reasoning tasks grounded in Stack Exchange and Stack Overflow sandbox contexts. With its 32,768-token context window, the model is suited to applications that require nuanced understanding and generation of technical Q&A content.
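The listing does not include a usage snippet, so the following is a minimal sketch of loading the checkpoint with Hugging Face `transformers` and budgeting a prompt against the 32,768-token context window. The generation settings, example prompt, and the `prompt_budget` helper are illustrative assumptions, not part of the model card.

```python
# Sketch only: assumes the checkpoint is hosted on the Hugging Face Hub
# under the repo id below and loads with standard AutoModel classes.
MODEL_ID = (
    "laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning"
    "_learning-rate_1e-06_Qwen3-32B"
)
CONTEXT_LENGTH = 32768  # context window stated on the model card


def prompt_budget(max_new_tokens: int, context_length: int = CONTEXT_LENGTH) -> int:
    """Tokens available for the prompt after reserving generation headroom.

    Hypothetical helper for illustration; not part of the model's API.
    """
    if max_new_tokens >= context_length:
        raise ValueError("generation budget exceeds the context window")
    return context_length - max_new_tokens


if __name__ == "__main__":
    # Requires `pip install transformers accelerate` and enough GPU memory
    # for a 32B checkpoint; shown for illustration only.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    prompt = "How do I profile a slow SQL query in PostgreSQL?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=1024)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Reserving a generation budget up front (here via `prompt_budget`) is a common pattern for long-context Q&A workloads, where retrieved forum threads can easily approach the full window.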