unsloth/GLM-Z1-32B-0414
Text Generation · Concurrency Cost: 2 · Model Size: 32B · Quant: FP8 · Context Length: 32k · License: MIT · Architecture: Transformer · Open Weights

unsloth/GLM-Z1-32B-0414 is a 32-billion-parameter model from the GLM-4 series, developed by THUDM. It is specifically designed as a reasoning model with deep thinking capabilities, built upon the GLM-4-32B-0414 base through cold start and extended reinforcement learning. It excels at tasks involving mathematics, code, and logic, significantly improving mathematical ability and complex problem-solving compared to its base model.

GLM-Z1-32B-0414: A Deep Reasoning Model

GLM-Z1-32B-0414 is a 32-billion-parameter reasoning model in the GLM-4 series from THUDM. Starting from the GLM-4-32B-0414 base, it was trained with a cold start followed by extended reinforcement learning, with additional training on mathematics, code, and logic tasks. This yields significant gains in those areas and in solving complex problems generally.

Key Capabilities

  • Enhanced Mathematical Abilities: Demonstrates strong performance in mathematical reasoning.
  • Complex Task Solving: Excels at tackling intricate problems requiring deep thought.
  • Instruction Following: Improved ability to adhere to given instructions.
  • Engineering Code: Strong performance in code-related tasks.
  • Function Calling: Enhanced capabilities for function calling scenarios.
  • Agent Task Capabilities: Strengthened atomic (foundational) skills for agent-based applications.
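As an illustration of the function-calling capability, a request carrying tool definitions in the widely used OpenAI-compatible schema might look like the sketch below. The `get_weather` tool and its parameters are hypothetical, and the exact schema accepted by a given serving stack may differ from this sketch.

```python
# Sketch: a function-calling request in the common OpenAI-compatible tool
# schema. The "get_weather" tool and its fields are hypothetical examples.

def build_tool_request(user_message: str) -> dict:
    """Assemble a chat request that exposes one callable tool to the model."""
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }]
    return {
        "messages": [{"role": "user", "content": user_message}],
        "tools": tools,
        "tool_choice": "auto",  # let the model decide whether to call a tool
    }

req = build_tool_request("What's the weather in Beijing?")
```

If the model decides to call the tool, the serving layer returns a structured tool call (name plus JSON arguments) that the client executes before sending the result back as a follow-up message.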

Usage Guidelines

To optimize performance, users are advised to:

  • Set temperature to 0.6 and top_p to 0.95 for balanced output.
  • Enforce thinking by starting the model's response with <think>\n; when using the bundled chat_template.jinja, this is applied automatically.
  • Trim dialogue history so that each assistant turn retains only the final user-visible reply, excluding hidden thinking content.
  • Consider enabling YaRN (rope scaling) for inputs exceeding 8,192 tokens by adding a rope_scaling entry to config.json.