laion/stackexchange-tezos-sandboxes_glm_4_6_traces_together_again
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 32k · Published: Dec 25, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

The laion/stackexchange-tezos-sandboxes_glm_4_6_traces_together_again model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the DCAgent/stackexchange-tezos-sandboxes_glm_4.6_traces_together_again dataset, whose name suggests reasoning traces generated by GLM-4.6 over Stack Exchange content about Tezos sandboxes. The model is therefore best suited to tasks within the domain of its fine-tuning data.
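As a minimal usage sketch, the checkpoint could be loaded with the Hugging Face `transformers` library, assuming the weights are published on the Hub under the repo id shown above (the `load` helper below is hypothetical, not part of the model card):

```python
# Hypothetical loading sketch, assuming the checkpoint is hosted on the
# Hugging Face Hub under this repo id.
REPO_ID = "laion/stackexchange-tezos-sandboxes_glm_4_6_traces_together_again"


def load(repo_id: str = REPO_ID):
    """Return (tokenizer, model) for the fine-tuned checkpoint.

    Imports are deferred so this sketch is cheap to import even when
    transformers/torch are not installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    # torch_dtype="auto" lets transformers pick the dtype stored in the
    # checkpoint (the card lists FP8 quantization).
    model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")
    return tokenizer, model
```

Since the model derives from Qwen3-8B, the usual Qwen chat template and a context window of up to 32k tokens would apply.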
