laion/glm-4_6-all-puzzles-32ep-131k
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · License: apache-2.0 · Architecture: Transformer · Open weights

The laion/glm-4_6-all-puzzles-32ep-131k model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the penfever/glm-4.6-all-puzzles-32ep-131k dataset for 7 epochs. The fine-tuning targets puzzle-solving tasks, using the specialized dataset to strengthen the model's reasoning in this domain.
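A minimal sketch of querying the model for a puzzle through an OpenAI-compatible chat endpoint. The request shape, system prompt, and sampling settings are illustrative assumptions, not documented values; only the model ID comes from this card:

```python
import json

# Model ID from this card; everything else below is a hypothetical example.
MODEL_ID = "laion/glm-4_6-all-puzzles-32ep-131k"

def build_request(puzzle: str, max_tokens: int = 1024) -> str:
    """Return a JSON chat-completion request body asking the model to solve a puzzle."""
    payload = {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a careful puzzle solver."},
            {"role": "user", "content": puzzle},
        ],
        # Keep the response well under the model's 32k-token context window.
        "max_tokens": max_tokens,
        "temperature": 0.6,  # assumed sampling setting, not a documented default
    }
    return json.dumps(payload, indent=2)

if __name__ == "__main__":
    print(build_request("Three boxes are all labeled incorrectly. ..."))
```

The body can be POSTed to any server exposing the usual `/v1/chat/completions` route (e.g. a vLLM or TGI deployment serving this checkpoint).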
