laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_warmup-ratio_0-05_Qwen3-32B
Text generation · Concurrency cost: 2 · Model size: 32B · Quantization: FP8 · Context length: 32k · Published: Jan 20, 2026 · License: other · Architecture: Transformer
laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning_warmup-ratio_0-05_Qwen3-32B is a 32-billion-parameter language model fine-tuned from Qwen/Qwen3-32B. It was trained on the penfever/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k-reasoning dataset, indicating a specialization in reasoning over StackExchange-style content. With its 32,768-token context window, it is suited to applications that require nuanced understanding and generation of technical Q&A-forum material.
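Since the checkpoint follows the standard Hugging Face layout of its Qwen3-32B base, it can presumably be served through the usual `transformers` text-generation API. The sketch below is illustrative only: the sampling settings, the example question, and the `truncate_to_context` helper are assumptions, not part of the model card. Loading the 32B weights is gated behind the `__main__` guard because it requires substantial GPU memory.

```python
"""Minimal usage sketch for this Qwen3-32B fine-tune (illustrative, not official)."""

MODEL_ID = (
    "laion/GLM-4.6-stackexchange-overflow-sandboxes-32eps-65k"
    "-reasoning_warmup-ratio_0-05_Qwen3-32B"
)
MAX_CONTEXT = 32_768  # context length stated on the model card


def truncate_to_context(token_ids, max_new_tokens, max_context=MAX_CONTEXT):
    """Keep only the most recent tokens so prompt + generation fits the window.

    Hypothetical helper: left-truncation is one common policy, not something
    the model card prescribes.
    """
    budget = max_context - max_new_tokens
    return token_ids[-budget:] if len(token_ids) > budget else list(token_ids)


if __name__ == "__main__":
    # Heavy imports and the weight download happen only when run directly.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    messages = [{"role": "user", "content": "How do I reverse a linked list in C?"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For long forum threads, truncating from the left keeps the most recent turns, which is usually what a Q&A assistant needs; other policies (summarizing older turns, for instance) are equally valid.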