Overview
The GLM-4.7 Series are Z.AI’s latest flagship models, featuring upgrades in two key areas: enhanced programming capabilities and more stable multi-step reasoning and execution. The series demonstrates significant improvements in executing complex agent tasks while delivering more natural conversational experiences and superior front-end aesthetics.

- GLM-4.7
- GLM-4.7-FlashX
- GLM-4.7-Flash
Positioning
Flagship, Highest Performance
Input Modalities
Text
Output Modalities
Text
Context Length
200K
Maximum Output Tokens
128K
Capability
Thinking Mode
Offers multiple thinking modes for different scenarios
Streaming Output
Supports real-time streaming responses to enhance the user interaction experience
Function Call
Powerful tool invocation capabilities, enabling integration with various external toolsets
Context Caching
Intelligent caching mechanism to optimize performance in long conversations
Structured Output
Support for structured output formats like JSON, facilitating system integration
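As a sketch of how these capabilities surface in a single request: the payload below follows the OpenAI chat-completions convention (which the OpenAI Python SDK listed later in this document uses). The model identifier, the `get_weather` tool, and the exact field shapes are assumptions for illustration; consult the API documentation for the authoritative schema.

```python
# Sketch of an OpenAI-style chat-completions payload exercising
# function calling, structured output, and streaming together.
# Field names follow the OpenAI convention; verify every field
# against the GLM-4.7 API documentation before relying on it.

def build_payload() -> dict:
    return {
        "model": "glm-4.7",  # assumed model identifier
        "messages": [
            {"role": "user", "content": "What's the weather in Beijing?"}
        ],
        # Function Call: declare the tools the model may invoke.
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool for illustration
                    "description": "Look up the current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        # Structured Output: request a JSON object response.
        "response_format": {"type": "json_object"},
        # Streaming Output: deliver tokens incrementally.
        "stream": True,
    }

payload = build_payload()
```

When the model decides to call a declared tool, the response carries the tool name and JSON arguments instead of plain text, and your application executes the tool and feeds the result back in a follow-up message.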
Usage
Agentic Coding
GLM-4.7 focuses on “task completion” rather than single-point code generation. It autonomously accomplishes requirement comprehension, solution decomposition, and multi-technology stack integration starting from target descriptions. In complex scenarios involving frontend-backend coordination, real-time interaction, and peripheral device calls, it directly generates structurally complete, executable code frameworks. This significantly reduces manual assembly and iterative debugging costs, making it ideal for complex demos, prototype validation, and automated development workflows.
Multimodal Interaction and Real-Time Application Development
In scenarios requiring cameras, real-time input, and interactive controls, GLM-4.7 demonstrates superior system-level comprehension. It integrates visual recognition, logic control, and application code into unified solutions, enabling rapid construction of interactive applications like gesture control and real-time feedback. This accelerates the journey from concept to operational application.
Web UI Generation and Visual Aesthetic Optimization
Significantly enhanced understanding of visual code and UI specifications. GLM-4.7 provides more aesthetically pleasing and consistent default solutions for layout structures, color harmony, and component styling, reducing time spent on repetitive “fine-tuning” of styles. It is well-suited for low-code platforms, AI frontend generation tools, and rapid prototyping scenarios.
High-Quality Dialogue and Complex Problem Collaboration
Maintains context and constraints more reliably during multi-turn conversations. Responds more directly to simple queries while continuously clarifying objectives and advancing resolution paths for complex issues. GLM-4.7 functions as a collaborative “problem-solving partner,” ideal for high-frequency collaboration scenarios like development support, solution discussions, and decision-making assistance.
Immersive Writing & Character-Driven Creation
Delivers more nuanced, vividly descriptive prose that builds atmosphere through sensory details like scent, sound, and light. In role-playing and narrative creation, it maintains consistent adherence to world-building and character archetypes, advancing plots with natural tension. Ideal for interactive storytelling, IP content creation, and character-based applications.
Professional-Grade PPT/Poster Generation
In office creation, GLM-4.7 demonstrates significantly enhanced layout consistency and aesthetic stability. It reliably adapts to mainstream aspect ratios like 16:9, minimizes template-like elements in typography hierarchy, white space, and color schemes, and produces near-ready-to-use results. This makes it ideal for AI presentation tools, enterprise office systems, and automated content generation scenarios.
Intelligent Search and Deep Research
Enhanced capabilities in user intent understanding, information retrieval, and result integration. For complex queries and research tasks, GLM-4.7 not only returns information but also performs structured organization and cross-source consolidation. Through multi-round interactions, it progressively narrows in on core conclusions, making it suitable for in-depth research and decision-support scenarios.
Introducing GLM-4.7
1. Comprehensive Coding Capability Enhancement
GLM-4.7 achieves significant breakthroughs across three dimensions: programming, reasoning, and agent capabilities:

- Enhanced Programming Capabilities: Substantially improves model performance in multi-language coding and terminal agent applications; GLM-4.7 now implements a “think before acting” mechanism within programming frameworks like Claude Code, Kilo Code, TRAE, Cline, and Roo Code, delivering more stable performance on complex tasks.
- Enhanced Frontend Aesthetics: GLM-4.7 shows marked progress in frontend generation quality, producing visually superior webpages, PPTs, and posters.
- Enhanced Tool Invocation Capabilities: GLM-4.7 demonstrates improved tool invocation skills, scoring 67 points on the BrowseComp web task evaluation and achieving an open-source SOTA of 84.7 points on the τ²-Bench interactive tool invocation benchmark, surpassing Claude Sonnet 4.5.
- Enhanced Reasoning Capabilities: Significantly improved mathematical and reasoning skills, achieving 42.8% on the HLE (Humanity’s Last Exam) benchmark, a 41% increase over GLM-4.6 and surpassing GPT-5.1.
- Enhanced General Capabilities: GLM-4.7 delivers more concise, intelligent, and empathetic conversations, with more eloquent and immersive writing and role-playing.
Code Arena: a professional coding evaluation system with millions of global users participating in blind tests. GLM-4.7 ranks first among open-source models and domestic models, outperforming GPT-5.2.

In mainstream benchmark performance, GLM-4.7’s coding capabilities align with Claude Sonnet 4.5:

- Achieved the top open-source ranking on SWE-bench Verified.
- Reached an open-source SOTA score of 84.9 on LiveCodeBench V6, surpassing Claude Sonnet 4.5.
- Achieved 73.8% on SWE-bench Verified (a 5.8% improvement over GLM-4.6), 66.7% on SWE-bench Multilingual (a 12.9% improvement), and 41% on Terminal Bench 2.0 (a 16.5% improvement).
2. GLM-4.7-Flash: Small but Powerful
In mainstream benchmarks like SWE-bench Verified and τ²-Bench, GLM-4.7-Flash achieves open-source SOTA scores among models of comparable size. Additionally, compared to similarly sized models, GLM-4.7-Flash demonstrates superior frontend and backend development capabilities.

In internal programming tests, GLM-4.7-Flash excels at both frontend and backend tasks. Beyond programming scenarios, we also recommend experiencing GLM-4.7-Flash in general-purpose applications such as Chinese writing, translation, long-form text processing, and emotional/role-playing interactions.

3. Perceived Improvement in Real Programming Scenarios
- Performance on Real Programming Tasks
- Controlled Evolution of Reasoning Capabilities
- Comprehensive Task Execution Capabilities
- Frontend Aesthetic Enhancement
In the Claude Code environment, we tested 100 real programming tasks covering core capabilities like frontend, backend, and instruction following. Results show GLM-4.7 demonstrates significant improvements over GLM-4.6 in both stability and deliverability.
With enhanced programming capabilities, developers can more naturally organize their development workflow around “task delivery,” forming an end-to-end closed loop from requirement understanding to implementation.
Resources
- API Documentation: Learn how to call the API.
Quick Start
The following is a full sample code to help you onboard GLM-4.7 with ease.

- cURL
- Official Python SDK
- Official Java SDK
- OpenAI Python SDK
Basic Call / Streaming Call
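As a minimal sketch of a basic call using only the Python standard library (no SDK required): the endpoint URL, model identifier, and the `ZAI_API_KEY` environment variable below are assumptions for illustration; see the API Documentation for the authoritative endpoint and authentication details. Setting `stream=True` in the payload switches the same request to a streaming call.

```python
import json
import os
import urllib.request

# Assumed endpoint; verify against the official API documentation.
API_URL = "https://api.z.ai/api/paas/v4/chat/completions"


def build_request(prompt: str, stream: bool = False) -> dict:
    """Build an OpenAI-compatible chat-completion payload for GLM-4.7."""
    return {
        "model": "glm-4.7",  # assumed model identifier; "glm-4.7-flash" also listed
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,  # True requests incremental streaming output
    }


def call(prompt: str) -> str:
    """Send a basic (non-streaming) call and return the reply text."""
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['ZAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


if os.environ.get("ZAI_API_KEY"):
    print(call("Hello, GLM-4.7"))
```

For streaming, the response arrives as server-sent events, so you would read the HTTP body line by line and accumulate each chunk’s delta instead of parsing a single JSON object; the official SDKs wrap this iteration for you.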