
Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Concurrency cost: 1 · Architecture: Transformer · Published: Jan 12, 2026

AgentEvol-7B is a 7 billion parameter language model developed by Zhiheng Xi et al. (AgentGym), built on Llama-2-Chat-7B. It is trained with the AgentEvol algorithm: initial behavioral cloning on the AgentTraj dataset, followed by exploration and learning over a broader set of instructions. The model is designed to evolve generally-capable LLM-based agents across multiple environments, and after its evolutionary training it outperforms state-of-the-art models on a range of agent tasks.


AgentEvol-7B Overview

AgentEvol-7B is a 7 billion parameter model built upon Llama-2-Chat-7B, developed by Zhiheng Xi et al. as part of the AgentGym project. This model introduces the AgentEvol algorithm, a novel method for evolving generally-capable LLM-based agents across diverse environments.

Key Capabilities & Training

  • Agent Evolution: Utilizes a two-stage training process: initial behavioral cloning on the AgentTraj dataset to establish foundational abilities, followed by extensive exploration and learning from a wider range of instructions and environments.
  • Enhanced Agent Performance: The evolutionary training allows AgentEvol-7B to significantly outperform existing state-of-the-art models on numerous agent-based tasks.
  • General-Purpose Agent: Aims to create agents with broad capabilities suitable for various interactive environments.
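The two-stage recipe above (behavioral cloning, then iterated exploration and learning) can be sketched as a minimal toy loop. This is an illustrative sketch only, not the authors' implementation: all names (`ToyEnv`, `TabularPolicy`, `agentevol`) are hypothetical, and the LLM policy is replaced by a table of action preferences so the loop runs end to end.

```python
import random

class ToyEnv:
    """One-step environment: action 1 succeeds, all others fail."""
    def step(self, action):
        return 1.0 if action == 1 else 0.0

class TabularPolicy:
    """Stand-in for an LLM agent policy: a table of action preferences."""
    def __init__(self, n_actions=3):
        self.weights = [1.0] * n_actions

    def sample(self, rng):
        # Sample an action proportionally to its weight.
        r = rng.random() * sum(self.weights)
        for a, w in enumerate(self.weights):
            r -= w
            if r <= 0:
                return a
        return len(self.weights) - 1

    def imitate(self, actions):
        # "Behavioral cloning" in miniature: upweight demonstrated actions.
        for a in actions:
            self.weights[a] += 1.0

def agentevol(env, policy, demos, rounds=5, rollouts=50, seed=0):
    rng = random.Random(seed)
    policy.imitate(demos)                 # stage 1: clone expert trajectories
    for _ in range(rounds):               # stage 2: explore, then learn
        trajs = [policy.sample(rng) for _ in range(rollouts)]
        good = [a for a in trajs if env.step(a) > 0]  # keep successful rollouts
        policy.imitate(good)              # re-clone the filtered trajectories
    return policy

policy = agentevol(ToyEnv(), TabularPolicy(), demos=[1, 1])
best = max(range(3), key=lambda a: policy.weights[a])
```

After a few rounds, the policy concentrates on the rewarded action, mirroring how AgentEvol alternates exploration with learning from the better trajectories it discovers.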

Use Cases

AgentEvol-7B is particularly well-suited for applications requiring intelligent agents that can learn, adapt, and perform effectively across different interactive environments and tasks, making it valuable for research and development in agent-based AI systems.