laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_lr_1e-5_Qwen3-32B
Text generation · Concurrency cost: 2 · Model size: 32B · Quant: FP8 · Context length: 32k · Published: Jan 12, 2026 · License: other · Architecture: Transformer
The laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_lr_1e-5_Qwen3-32B model is a 32-billion-parameter language model fine-tuned from Qwen/Qwen3-32B. It was trained on the penfever/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning dataset, which suggests a specialization in reasoning tasks, likely within technical Q&A or similar domains. With a 32768-token context length, it is designed to handle extensive input for complex problem-solving.
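Since the model is a fine-tune of Qwen/Qwen3-32B, it should load through the standard Hugging Face `transformers` interface. The sketch below is an assumption, not usage confirmed by this card: the generation settings, the sample question, and the helper `build_chat` are illustrative, and the FP8 checkpoint at 32B parameters still requires substantial GPU memory.

```python
"""Hypothetical usage sketch for the laion/GLM-4.6-...-Qwen3-32B fine-tune.

Assumes the model follows the standard Qwen3 chat format via `transformers`;
nothing here is confirmed by the model card itself.
"""

MODEL_ID = (
    "laion/GLM-4.6-stackexchange-overflow-sandboxes-"
    "32eps-65k-reasoning_lr_1e-5_Qwen3-32B"
)


def build_chat(question: str) -> list[dict]:
    """Wrap a single technical question in the chat-message format
    that `tokenizer.apply_chat_template` expects."""
    return [{"role": "user", "content": question}]


def main() -> None:
    # Imported lazily so the lightweight helper above works even
    # without transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # device_map="auto" shards the 32B model across available GPUs.
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    messages = build_chat(
        "Why does my Python generator exhaust after one pass?"
    )
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # The 32k context leaves ample room for long outputs; 512 new
    # tokens is an arbitrary illustrative budget.
    out = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

The heavy model download only happens inside `main()`, so the message-building helper can be reused or tested independently of the checkpoint.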