laion/exp-psu-stackoverflow-31K_glm_4_7_traces
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Feb 23, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

laion/exp-psu-stackoverflow-31K_glm_4_7_traces is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the DCAgent--exp-psu-stackoverflow-31K_glm_4.7_traces dataset (local snapshot: /data/cat/ws/befe330h-befe330h-otagent/huggingface/hub/datasets--DCAgent--exp-psu-stackoverflow-31K_glm_4.7_traces/snapshots/5b1d8b21707162015662fa506ad12998155f4ab9_thinking_preprocessed), which suggests specialization in Stack Overflow-style technical Q&A. With its 32768-token context length, the model is likely well suited to understanding and generating technical discussions and code-related content.
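A minimal usage sketch, assuming the checkpoint is loadable through the Hugging Face `transformers` library like its Qwen/Qwen3-8B base; the helper names, system prompt, and generation parameters below are illustrative assumptions, not a documented API for this model:

```python
# Hypothetical sketch for querying the model via `transformers`.
# Assumes the checkpoint follows the Qwen-family chat template.

MODEL_ID = "laion/exp-psu-stackoverflow-31K_glm_4_7_traces"
MAX_CONTEXT = 32768  # context length stated on the model card


def build_messages(question: str) -> list[dict]:
    """Wrap a technical question in the chat-message format Qwen-family models expect."""
    return [
        {"role": "system", "content": "You are a helpful programming assistant."},
        {"role": "user", "content": question},
    ]


def ask(question: str, max_new_tokens: int = 512) -> str:
    """Load the model lazily and generate an answer (requires the weights and a GPU)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # heavy import, kept lazy

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    prompt = tokenizer.apply_chat_template(
        build_messages(question), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(
        prompt, return_tensors="pt", truncation=True, max_length=MAX_CONTEXT
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    # Prompt construction alone needs no model download:
    msgs = build_messages("How do I reverse a list in Python?")
    print(msgs[1]["content"])
```

Keeping the `transformers` import inside `ask` lets the prompt-building logic be inspected or tested without downloading the 8B weights.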
