zai-org/GLM-Z1-32B-0414

Hugging Face
TEXT GENERATION · Concurrency Cost: 2 · Model Size: 32B · Quant: FP8 · Ctx Length: 32K · Published: Apr 8, 2025 · License: MIT · Architecture: Transformer · Open Weights

The GLM-Z1-32B-0414 is a 32 billion parameter reasoning model from the GLM family, developed by Team GLM. It is specifically designed for deep thinking capabilities, with enhanced performance in mathematics, code, and logic tasks through extended reinforcement learning. This model excels at solving complex problems and is suitable for local deployment.


GLM-Z1-32B-0414: A Deep Thinking Reasoning Model

GLM-Z1-32B-0414 is a 32 billion parameter model from the GLM family, developed by Team GLM. It builds on the GLM-4-32B-0414 base model, which was pre-trained on 15T tokens of high-quality data, including a large share of reasoning-oriented synthetic data. The model then underwent further training with a cold start followed by extended reinforcement learning, specifically targeting mathematics, code, and logic tasks.

Key Capabilities

  • Deep Thinking: Designed for complex problem-solving with enhanced reasoning abilities.
  • Mathematical Proficiency: Significantly improved performance in mathematical tasks.
  • Code and Logic: Stronger capabilities in handling engineering code and logical problems.
  • Reinforcement Learning: Utilizes general reinforcement learning based on pairwise ranking feedback to boost overall performance.
  • Local Deployment: Supports user-friendly local deployment.
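
Reasoning models in this family typically emit their chain of thought before the final answer. As a hedged sketch, assuming the common convention of wrapping the reasoning in `<think>...</think>` tags (check the model's chat template for the exact format), the two parts can be separated like this:

```python
import re

def split_reasoning(output: str) -> tuple[str, str]:
    """Split a model response into (reasoning, answer).

    Assumes the chain of thought is wrapped in <think>...</think> tags;
    if no tags are found, the whole output is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    if match is None:
        return "", output.strip()
    reasoning = match.group(1).strip()
    answer = output[match.end():].strip()
    return reasoning, answer

raw = "<think>2 + 2 = 4, so the answer is 4.</think>The answer is 4."
reasoning, answer = split_reasoning(raw)
print(answer)  # → The answer is 4.
```

Stripping the reasoning block this way is useful when only the final answer should be shown to users or passed to downstream tools.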

Good For

  • Complex Task Solving: Ideal for scenarios requiring deep analytical thought.
  • Mathematical Reasoning: Applications needing robust mathematical problem-solving.
  • Code Generation & Analysis: Tasks involving engineering code.
  • Agent Tasks: Strengthening atomic capabilities required for agent-based applications.

Popular Sampler Settings

The most popular configurations among Featherless users for this model tune the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
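
These parameters map directly onto the request body of an OpenAI-compatible completion endpoint. A minimal sketch of such a payload follows; the values are illustrative placeholders rather than a recommended configuration, and `top_k`, `repetition_penalty`, and `min_p` are common server extensions rather than part of the base OpenAI schema:

```python
# Illustrative chat-completion payload carrying the sampler parameters
# listed above. Values are placeholders, not tuned recommendations.
payload = {
    "model": "zai-org/GLM-Z1-32B-0414",
    "messages": [
        {"role": "user", "content": "Prove that sqrt(2) is irrational."},
    ],
    "temperature": 0.6,        # randomness of token sampling
    "top_p": 0.95,             # nucleus-sampling probability cutoff
    "top_k": 40,               # restrict sampling to the top-k tokens
    "frequency_penalty": 0.0,  # penalize tokens by how often they appeared
    "presence_penalty": 0.0,   # penalize tokens that appeared at all
    "repetition_penalty": 1.0, # multiplicative repetition penalty (1.0 = off)
    "min_p": 0.0,              # probability floor relative to the top token
}
```

Any HTTP client can POST this payload as JSON to an OpenAI-compatible `/chat/completions` route; which of the extension parameters are honored depends on the serving backend.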