laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_adam-beta1_0-93_Qwen3-32B
Text Generation
Concurrency Cost: 2 · Model Size: 32B · Quant: FP8 · Ctx Length: 32k · Published: Jan 29, 2026 · License: other · Architecture: Transformer · Status: Cold

laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_adam-beta1_0-93_Qwen3-32B is a 32-billion-parameter language model fine-tuned from Qwen/Qwen3-32B. It was trained on the penfever/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning dataset, which suggests it is optimized for reasoning tasks such as technical Q&A and problem-solving. The model supports a 32,768-token context length, making it suitable for applications that require extended reasoning over long inputs.
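The card does not include a usage snippet. A minimal sketch, assuming the checkpoint loads through the standard Hugging Face `transformers` API like its Qwen/Qwen3-32B base (the `build_generation_kwargs` helper is illustrative, not part of the model's tooling), might look like this:

```python
MODEL_ID = "laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_adam-beta1_0-93_Qwen3-32B"
MAX_CTX = 32768  # context length stated on the card


def build_generation_kwargs(prompt_tokens: int, max_new_tokens: int = 1024) -> dict:
    """Clamp the completion budget so prompt + completion fits the 32k window.

    Hypothetical helper for illustration; not provided by the model repo.
    """
    budget = max(0, MAX_CTX - prompt_tokens)
    return {"max_new_tokens": min(max_new_tokens, budget), "do_sample": False}


if __name__ == "__main__":
    # Requires `pip install transformers accelerate` and enough GPU memory
    # for 32B weights (the card lists an FP8 quantization for serving).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    prompt = "Explain why quicksort is O(n log n) on average."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs, **build_generation_kwargs(inputs["input_ids"].shape[1])
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The helper simply ensures `max_new_tokens` never pushes the total sequence past the advertised context length; beyond that, generation parameters should follow whatever the base model's documentation recommends.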
