zai-org/GLM-4.7-Flash
Task: Text Generation
Concurrency Cost: 2
Model Size: 30B
Quantization: FP8
Context Length: 32k
Published: Jan 19, 2026
License: MIT
Architecture: Transformer
Open Weights

GLM-4.7-Flash is a 30-billion-parameter Mixture-of-Experts (MoE) model from zai-org, built for efficient, lightweight deployment. It performs strongly across a range of benchmarks, particularly in agentic tasks, reasoning, and coding, making it a balanced choice between performance and efficiency in the 30B class.
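As a minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under this model ID and exposes a standard `transformers` causal-LM interface with a chat template (neither is confirmed by this card), loading and prompting it might look like:

```python
# Minimal sketch, not an official example. Assumes "zai-org/GLM-4.7-Flash"
# is available on the Hugging Face Hub as a standard causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-4.7-Flash"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # load the published weights in their native precision
    device_map="auto",   # place layers across available GPUs automatically
)

# Chat-style prompt; assumes the tokenizer ships a chat template.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For a 30B MoE checkpoint, expect the memory footprint to be dominated by the full expert weights even though only a subset of experts is active per token, so `device_map="auto"` or a quantized variant may be needed on smaller GPUs.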
