gorilla-llm/gorilla-openfunctions-v1

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4K · Published: Nov 16, 2023 · License: apache-2.0 · Architecture: Transformer · Open Weights

gorilla-llm/gorilla-openfunctions-v1 is a 7-billion-parameter language model developed by gorilla-llm that extends LLM chat completions with the ability to generate executable API calls. Given a natural-language instruction and an API context, it formulates properly structured JSON with the correct arguments, including support for parallel function calls. It is optimized for function-calling scenarios, enabling seamless integration with external tools and services.


Overview

gorilla-llm/gorilla-openfunctions-v1 is a 7-billion-parameter model developed by gorilla-llm that specializes in converting natural-language instructions into executable API calls. It extends the standard LLM chat-completion feature: the model interprets user intent together with the supplied API context to formulate a precise function call.

Key Capabilities

  • Function Calling: Translates natural language queries into structured API calls with appropriate arguments.
  • Parallel Function Support: Unlike its predecessor (v0), v1 can handle and choose between multiple functions, allowing for more complex interactions.
  • OpenAI Functions Compatibility: Designed to be compatible with OpenAI's Functions framework, simplifying integration for developers.
  • Local and Hosted Deployment: Can be run locally using Hugging Face Transformers or accessed via gorilla-llm's hosted servers.
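For local use, the Gorilla project's examples build a plain-text prompt that interleaves the user question with the JSON-serialized function specifications before passing it to the model via Hugging Face Transformers. A minimal sketch of that prompt construction (the exact `<<question>>`/`<<function>>` template and the `get_prompt` helper name are taken from the project's published examples, but treat the details as assumptions):

```python
import json

def get_prompt(user_query: str, functions: list) -> str:
    """Build a Gorilla OpenFunctions prompt (format assumed from the
    project's examples): the question and the JSON-serialized function
    specs are tagged with <<question>> and <<function>> markers."""
    if not functions:
        return f"USER: <<question>> {user_query}\nASSISTANT: "
    functions_string = json.dumps(functions)
    return (f"USER: <<question>> {user_query} "
            f"<<function>> {functions_string}\nASSISTANT: ")

# Illustrative function spec in OpenAI Functions (JSON Schema) style.
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string",
                         "description": "City and state, e.g. Boston, MA"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}]

prompt = get_prompt("What's the weather like in Boston?", functions)
# The resulting string would then be fed to the model, e.g. with
# transformers.pipeline("text-generation",
#                       model="gorilla-llm/gorilla-openfunctions-v1").
```

Because several specs can be serialized into the same prompt, this is also how v1's choose-between-multiple-functions capability is exercised.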

Use Cases

This model is particularly well-suited for applications requiring robust tool-use capabilities, such as:

  • Automated Task Execution: Generating API calls to perform actions based on user commands (e.g., "Call me an Uber").
  • Chatbots and Virtual Assistants: Enabling conversational agents to interact with external services and APIs.
  • Workflow Automation: Orchestrating complex processes by dynamically calling functions based on user input.
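To make the "Call me an Uber" example concrete: the application supplies a function specification as API context, and the model responds with an executable call string. The spec below is a hypothetical illustration in the OpenAI Functions (JSON Schema) style; the function name, fields, and example output are assumptions, not the project's exact schema:

```python
import json

# Hypothetical spec for a ride-hailing function; names and fields are
# illustrative only.
uber_ride_spec = {
    "name": "uber_ride",
    "description": "Find a suitable ride for the customer",
    "parameters": {
        "type": "object",
        "properties": {
            "loc":  {"type": "string",
                     "description": "Pickup location"},
            "type": {"type": "string",
                     "enum": ["plus", "comfort", "black"]},
            "time": {"type": "integer",
                     "description": "Maximum wait time in minutes"},
        },
        "required": ["loc", "type", "time"],
    },
}

# The spec is serialized and placed in the model's API context; given a
# query like "Call me an Uber ride type Plus in Berkeley at zipcode
# 94704 in 10 minutes", the model emits an executable call string along
# the lines of: uber.ride(loc="Berkeley 94704", type="plus", time=10)
spec_json = json.dumps(uber_ride_spec)
```

The application then parses or directly evaluates the returned call against its own API bindings, which is what makes the generated output "executable" rather than free-form text.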

Training and Licensing

The models and data used for training are released under the Apache 2.0 license. The project is an open-source effort from UC Berkeley, encouraging community contributions.