AgentGym/AgentEvol-7B
Text generation · Model size: 7B · Quantization: FP8 · Context length: 4K · Concurrency cost: 1 · Architecture: Transformer · Published: Jun 6, 2024
AgentGym's AgentEvol-7B is a 7-billion-parameter language model based on Llama-2-Chat-7B and trained with the AgentEvol algorithm. It is designed to produce generally capable LLM-based agents that evolve across multiple environments: behavioral cloning on expert trajectories is combined with subsequent exploration and learning from diverse instructions, and the resulting agent outperforms state-of-the-art models on a range of agent benchmarks.
AgentEvol-7B: An Evolving Agentic LLM
AgentEvol-7B is a 7 billion parameter model built upon Llama-2-Chat-7B, developed by AgentGym using the innovative AgentEvol algorithm. This method focuses on creating generally-capable LLM-based agents that can adapt and learn across diverse environments.
Key Capabilities & Training
- Evolutionary Learning: The model is first trained with behavioral cloning on the AgentTraj dataset, which supplies foundational agent abilities and prior knowledge.
- Exploration and Adaptation: After this initial training, AgentEvol-7B explores and learns from a broader set of instructions spanning many tasks and environments.
- Enhanced Performance: This iterative evolution substantially improves the agent, which surpasses existing state-of-the-art models on numerous agentic tasks.
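The two-phase scheme above can be illustrated with a deliberately tiny sketch: a tabular policy is first fit on a handful of expert state-action pairs (standing in for behavioral cloning on AgentTraj), then improved by sampling its own actions and reinforcing only the rewarded ones (standing in for the exploration-and-learning phase). The toy environment, states, and reward function here are invented for illustration and are not part of AgentGym.

```python
import random

# Toy stand-in for an AgentGym environment: for each state there is one
# correct action, and reward is 1.0 for choosing it, else 0.0.
STATES = ["s0", "s1", "s2"]
ACTIONS = ["a0", "a1", "a2"]
TARGET = {"s0": "a1", "s1": "a2", "s2": "a0"}

def reward(state, action):
    return 1.0 if action == TARGET[state] else 0.0

class CountPolicy:
    """Tabular policy: action probabilities proportional to visit counts."""
    def __init__(self):
        self.counts = {s: {a: 1.0 for a in ACTIONS} for s in STATES}
    def act(self, state, rng):
        acts = list(self.counts[state])
        weights = [self.counts[state][a] for a in acts]
        return rng.choices(acts, weights=weights, k=1)[0]
    def update(self, state, action, weight=1.0):
        self.counts[state][action] += weight

# Phase 1: behavioral cloning — imitate expert (state, action) pairs.
def behavioral_cloning(policy, expert_pairs):
    for state, action in expert_pairs:
        policy.update(state, action, weight=5.0)

# Phase 2: exploration and learning — sample trajectories with the current
# policy and reinforce only the behavior that earned reward.
def evolve(policy, iterations=200, seed=0):
    rng = random.Random(seed)
    for _ in range(iterations):
        state = rng.choice(STATES)
        action = policy.act(state, rng)
        r = reward(state, action)
        if r > 0:
            policy.update(state, action, weight=r)

def success_rate(policy, trials=300, seed=1):
    rng = random.Random(seed)
    hits = 0.0
    for _ in range(trials):
        s = rng.choice(STATES)
        hits += reward(s, policy.act(s, rng))
    return hits / trials

policy = CountPolicy()
expert = [("s0", "a1"), ("s1", "a2")]   # expert data covers only two states
behavioral_cloning(policy, expert)
base = success_rate(policy)             # ability after cloning alone
evolve(policy)                          # evolution across the environment
print(base, success_rate(policy))       # exploration lifts performance
```

Note how the expert data leaves state `s2` uncovered: behavioral cloning alone plateaus there, and only the exploration phase lets the agent improve on it, which is the intuition behind evolving beyond the imitation data.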
Resources
- Project Page: https://agentgym.github.io/
- Trajectory Dataset: https://huggingface.co/datasets/AgentGym/AgentTraj-L
- Evaluation Benchmark: https://huggingface.co/datasets/AgentGym/AgentEval
Good For
- Developing and researching LLM-based agents.
- Tasks requiring agents to learn and adapt in new environments.
- Benchmarking agent performance against complex, multi-environment challenges.