zai-org/GLM-4.7

358B parameters · FP8 · 32,768-token context · License: MIT
Overview

GLM-4.7: Your Advanced Coding and Reasoning Partner

GLM-4.7, developed by zai-org, is a 358-billion-parameter language model with a 32,768-token context window, designed to be a highly capable coding and reasoning assistant. It introduces substantial improvements over its predecessor, GLM-4.6, particularly in agentic coding, terminal-based tasks, and complex problem-solving.
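
The snippet below is a minimal usage sketch, assuming the model is served behind an OpenAI-compatible chat completions endpoint; the base URL, API-key environment variable, and model identifier are illustrative placeholders rather than values confirmed on this page.

```python
# Minimal sketch: querying GLM-4.7 through an assumed OpenAI-compatible endpoint.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # hypothetical endpoint
    api_key=os.environ["INFERENCE_API_KEY"],     # hypothetical env var
)

response = client.chat.completions.create(
    model="zai-org/GLM-4.7",  # assumed identifier; check your provider's catalog
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a singly linked list."},
    ],
    max_tokens=1024,
)

print(response.choices[0].message.content)
```

Note that the 32,768-token context length covers both the prompt and the generated completion, so long agentic traces need to budget tokens accordingly.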

Key Capabilities

  • Enhanced Coding: Achieves significant gains in multilingual agentic coding and terminal-based tasks, scoring 73.8% (+5.8%) on SWE-bench, 66.7% (+12.9%) on SWE-bench Multilingual, and 41% (+16.5%) on Terminal Bench 2.0. It also supports thinking before acting when operating inside agent frameworks.
  • Improved UI Generation: Excels at producing cleaner, more modern webpages and better-looking slides with accurate layouts.
  • Advanced Tool Usage: Demonstrates substantial improvements in tool-using capabilities, with better performance on benchmarks like τ²-Bench and web browsing via BrowseComp.
  • Complex Reasoning: Delivers a significant boost in mathematical and reasoning abilities, scoring 42.8% (+12.4%) on the HLE (Humanity’s Last Exam) benchmark compared to GLM-4.6.
  • Interleaved & Preserved Thinking: Features "Interleaved Thinking" for improved instruction following and "Preserved Thinking" to retain reasoning across multi-turn conversations, enhancing stability and control in complex, long-horizon tasks. "Turn-level Thinking" allows reasoning to be enabled or skipped per turn, trading latency against accuracy (see the sketch after this list).
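
As a sketch of turn-level thinking control, the example below assumes an OpenAI-compatible server that exposes a reasoning switch through the request's extra body; the `chat_template_kwargs` / `enable_thinking` names are assumptions borrowed from common open-source serving stacks, not parameters confirmed for this deployment, so consult the actual serving documentation for the real switch.

```python
# Sketch: toggling the model's reasoning phase per turn via an assumed extra-body switch.
# The "chat_template_kwargs"/"enable_thinking" names are assumptions drawn from common
# open-source serving stacks, not confirmed for this deployment.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # hypothetical endpoint
    api_key=os.environ["INFERENCE_API_KEY"],     # hypothetical env var
)

def ask(prompt: str, think: bool) -> str:
    """Send a single turn, enabling or skipping the thinking phase for that turn only."""
    response = client.chat.completions.create(
        model="zai-org/GLM-4.7",  # assumed identifier
        messages=[{"role": "user", "content": prompt}],
        extra_body={"chat_template_kwargs": {"enable_thinking": think}},
    )
    return response.choices[0].message.content

# Low-latency turn: trivial edit, no extended reasoning needed.
print(ask("Rename the variable `tmp` to `buffer` in: tmp = read()", think=False))

# Hard turn: let the model reason before answering.
print(ask("Find the bug in a binary search that loops forever when the target is absent.", think=True))
```

Keeping the switch per request, rather than per deployment, is what enables the latency/accuracy trade-off described above: cheap turns skip the reasoning phase while hard turns pay for it.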

Good For

  • Developers requiring a robust model for agentic coding and terminal-based automation.
  • Applications demanding complex mathematical and logical reasoning.
  • Generating high-quality UI elements for web pages and presentations.
  • Scenarios involving advanced tool integration and web browsing tasks.
  • Use cases benefiting from consistent and stable multi-turn interactions through preserved reasoning.