GLM-4.7-Flash is a 30-billion-parameter Mixture-of-Experts (MoE) model developed by zai-org, designed for efficient, high-performance lightweight deployment. It performs strongly across a range of benchmarks, particularly in agentic tasks, reasoning, and coding, offering a balanced trade-off between performance and efficiency in the 30B class.