xushuwen23/GraphWalker-7B

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Context Length: 32K · Published: Mar 30, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

GraphWalker-7B is a 7.6 billion parameter large language model developed by Shuwen Xu and team, fine-tuned from Qwen2.5-7B-Instruct with a 32K context length. It specializes in Agentic Knowledge Graph Question Answering (KGQA), learning to navigate knowledge graphs through a synthetic trajectory curriculum. The model turns an LLM into a reasoning agent for multi-turn KGQA over large-scale knowledge graphs such as Freebase, achieving strong generalization.


GraphWalker-7B: Agentic KGQA Model

GraphWalker-7B is a 7.6 billion parameter large language model, fine-tuned from Qwen2.5-7B-Instruct, specifically designed for Agentic Knowledge Graph Question Answering (KGQA). It leverages a 32,768 token context window to enable sophisticated reasoning over complex knowledge graphs.

Key Capabilities

  • Agentic KGQA: Transforms LLMs into reasoning agents capable of autonomously navigating massive knowledge graphs (e.g., Freebase) through a "Think-Query-Observe" loop.
  • Synthetic Trajectory Curriculum: Trained on a synthetic trajectory curriculum that yields strong generalization across KGQA tasks.
  • Multi-turn Question Answering: Excels at handling multi-turn interactions within knowledge graph environments.
  • Performance: Delivers significant gains over vanilla agents on benchmarks such as CWQ and WebQSP. For instance, GraphWalker-7B-SFT-RL (based on Qwen2.5-7B-Instruct) scores 79.6 EM / 74.2 F1 on CWQ and 91.5 EM / 88.6 F1 on WebQSP, outperforming other GraphWalker variants as well as vanilla agents built on models like GPT-4o-mini and DeepSeek-V3.2.
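The "Think-Query-Observe" loop from the first bullet can be sketched as a simple iterative graph walk. The toy knowledge graph, relation names, and the `query`/`answer` helpers below are illustrative assumptions for exposition only, not GraphWalker's actual interface:

```python
# Hypothetical sketch of a Think-Query-Observe loop over a toy knowledge
# graph. Entities, relations, and helpers are illustrative stand-ins,
# not GraphWalker's real API.

# Toy KG: subject -> relation -> list of objects
KG = {
    "Barack Obama": {"born_in": ["Honolulu"], "spouse": ["Michelle Obama"]},
    "Honolulu": {"located_in": ["Hawaii"]},
}

def query(entity: str, relation: str) -> list[str]:
    """One 'Query' step: fetch neighbors of `entity` along `relation`."""
    return KG.get(entity, {}).get(relation, [])

def answer(relation_path: list[str], start: str) -> list[str]:
    """Walk the graph along a relation path, Think-Query-Observe style."""
    frontier = [start]
    for relation in relation_path:   # Think: the agent picks the next relation
        observed = []
        for entity in frontier:      # Query: issue graph lookups
            observed.extend(query(entity, relation))
        frontier = observed          # Observe: results become the new frontier
    return frontier

# "Where is Barack Obama's birthplace located?" -> path born_in / located_in
print(answer(["born_in", "located_in"], "Barack Obama"))  # -> ['Hawaii']
```

In the real agent the "Think" step is the LLM choosing which relation to expand next from its observations, rather than following a fixed path as in this sketch.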

Good For

  • Developing intelligent agents for querying and reasoning over large-scale knowledge graphs.
  • Applications requiring precise, multi-turn question answering from structured data.
  • Research and development in agentic AI and knowledge graph interaction.