laion/glm-4_6-freelancer-32ep-131k-torch
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
The laion/glm-4_6-freelancer-32ep-131k-torch model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the penfever/glm-4.6-freelancer-32ep-131k-torch dataset for 7 epochs with a 32768-token context length. The model targets general-purpose text generation and inherits the Qwen3 architecture, making it broadly applicable across tasks.