Salesforce/xLAM-2-32b-fc-r

32B parameters · FP8 · 32,768-token context · License: cc-by-nc-4.0 · Available on Hugging Face
Overview

xLAM-2-32b-fc-r: A Large Action Model for AI Agents

Salesforce/xLAM-2-32b-fc-r is a 32-billion-parameter model from the xLAM-2 series, designed as a Large Action Model (LAM). LAMs act as the "brains" of AI agents, enabling them to autonomously plan and execute tasks based on user intentions. This model is a research release built on advanced data-synthesis and training pipelines.

Key Capabilities & Features

  • Multi-turn Conversation: Significantly enhanced capabilities for engaging in complex, multi-turn dialogues.
  • Advanced Tool Usage & Function Calling: Optimized for function-calling tasks (the -fc suffix), allowing agents to interact with external tools and APIs. It integrates with Hugging Face chat templates and vLLM for efficient inference (see the example after this list).
  • State-of-the-Art Performance: Achieves leading results on the BFCL and τ-bench benchmarks, outperforming several frontier models in agentic capabilities and consistency.
  • APIGen-MT Training: Trained using the novel APIGen-MT framework, which generates high-quality data through simulated agent-human interactions, ensuring robust performance in real-world scenarios.
  • Context Length: Features a default context length of 32k tokens, extendable up to 128k for Qwen 2.5-based models using techniques such as YaRN (see the configuration sketch after this list).
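
As referenced in the function-calling bullet above, the snippet below is a minimal sketch of invoking the model with a tool definition through the Hugging Face chat template. The get_weather tool, its schema, and the generation settings are illustrative assumptions rather than the model card's canonical example; check the model card for the exact tool format its chat template expects.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Salesforce/xLAM-2-32b-fc-r"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative tool definition in JSON-schema style; the chat template
# serializes it into the prompt so the model can emit a structured call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g. 'London'",
                    }
                },
                "required": ["location"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What's the weather like in London right now?"}]

inputs = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, i.e. the model's proposed tool call.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```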

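The 128k extension mentioned in the context-length bullet is typically enabled by adding a YaRN rope-scaling entry to the checkpoint's config.json, in the style documented for Qwen 2.5 models. The path, scaling factor, and key names below are assumptions that can vary across transformers and vLLM versions (recent vLLM releases also accept a rope-scaling override at serve time), so verify them against the serving stack you use.

```python
import json
from pathlib import Path

# Placeholder path to a local copy of the checkpoint.
config_path = Path("/path/to/xLAM-2-32b-fc-r/config.json")

config = json.loads(config_path.read_text())
# Assumed YaRN entry in the style documented for Qwen 2.5 checkpoints;
# factor 4.0 scales the default 32k window toward roughly 128k tokens.
config["rope_scaling"] = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}
config_path.write_text(json.dumps(config, indent=2))
```
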
Ideal Use Cases

  • AI Agent Development: Building sophisticated AI agents that require advanced decision-making, planning, and execution capabilities.
  • Automated Workflows: Automating complex tasks across diverse domains by translating user intentions into actionable steps (a schematic agent loop follows this list).
  • Research in Agentic AI: Exploring and developing new paradigms in agent-human interaction and autonomous systems.
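
To make "translating user intentions into actionable steps" concrete, here is a schematic agent loop. generate_reply is a hypothetical stand-in for whichever inference call you use (such as the transformers snippet above), and the parser assumes the model returns its tool calls as a JSON list of {"name", "arguments"} objects; the actual output format is defined by the model's chat template, so adapt the parsing to match.

```python
import json
from typing import Any, Callable

def generate_reply(messages: list[dict], tools: list[dict]) -> str:
    """Hypothetical inference wrapper: return the model's raw text reply."""
    raise NotImplementedError("plug in your transformers or vLLM call here")

# Concrete Python implementations the agent is allowed to invoke.
TOOL_REGISTRY: dict[str, Callable[..., Any]] = {
    "get_weather": lambda location: {"location": location, "temp_c": 17},
}

def run_agent(user_request: str, tools: list[dict], max_turns: int = 5) -> str:
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_turns):
        reply = generate_reply(messages, tools)
        try:
            # Assumed output shape: [{"name": ..., "arguments": {...}}, ...]
            calls = json.loads(reply)
        except json.JSONDecodeError:
            # No parseable tool call: treat the reply as the final answer.
            return reply
        if not isinstance(calls, list):
            return reply
        messages.append({"role": "assistant", "content": reply})
        for call in calls:
            result = TOOL_REGISTRY[call["name"]](**call["arguments"])
            # Feed the execution result back so the model can plan the next step.
            messages.append(
                {"role": "tool", "name": call["name"], "content": json.dumps(result)}
            )
    return "Stopped after reaching the turn limit."
```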