laion/exp-psu-stackoverflow-1K_glm_4_7_traces
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Jan 28, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The laion/exp-psu-stackoverflow-1K_glm_4_7_traces model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the DCAgent/exp-psu-stackoverflow-1K_glm_4.7_traces dataset, which suggests a specialization in processing and generating content related to Stack Overflow data. The model targets tasks that benefit from knowledge drawn from programming Q&A forums, and offers a 32,768-token context length.
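A minimal inference sketch with Hugging Face `transformers` might look like the following. This is an assumption-laden example, not official usage: the chat-style prompting, generation parameters, and the `build_messages` helper are illustrative choices, and the model id is taken from the card above.

```python
# Hypothetical inference sketch for laion/exp-psu-stackoverflow-1K_glm_4_7_traces.
# The chat-style usage and generation settings below are assumptions based on
# the model card (fine-tuned from Qwen/Qwen3-8B, 32k context), not documented API.

MODEL_ID = "laion/exp-psu-stackoverflow-1K_glm_4_7_traces"
CTX_LEN = 32768  # advertised context length in tokens


def build_messages(question: str) -> list[dict]:
    """Wrap a programming question in a simple single-turn chat message list."""
    return [{"role": "user", "content": question}]


def main() -> None:
    # Heavy imports live here so the helper above stays importable without
    # transformers installed or model weights downloaded.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    messages = build_messages("How do I reverse a list in Python?")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

Because the model is fine-tuned from Qwen/Qwen3-8B, it should load with the stock Qwen3 architecture support in recent `transformers` releases; FP8-quantized weights may additionally require a compatible runtime.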
