gorilla-llm/gorilla-7b-tf-delta-v0
Gorilla-7b-tf-delta-v0 is a 7 billion parameter auto-regressive language model developed by Gorilla LLM (UC Berkeley), fine-tuned from LLaMA weights. This model specializes in enabling Large Language Models to use tools by accurately invoking over 1,600 APIs, specifically demonstrating reliable use of TensorFlow Hub APIs. It is designed to write semantically and syntactically correct API calls from natural language queries, significantly reducing hallucination in API invocation.
Overview
Gorilla-7b-tf-delta-v0 is a 7-billion-parameter auto-regressive language model from Gorilla LLM (UC Berkeley), fine-tuned from LLaMA-7B. Its core contribution is connecting Large Language Models (LLMs) to external tools: given a natural-language query, the model emits a semantically and syntactically correct API call, sharply reducing the hallucinated invocations common when general-purpose LLMs write API code. Note that these are delta weights; they must be merged with the original LLaMA-7B weights before the model can be used.
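The model is queried with a plain single-turn instruction prompt. A minimal sketch of prompt construction, assuming the `###USER:`/`###ASSISTANT:` turn format used by the Gorilla inference scripts (the exact template should be verified against the official repository):

```python
# Sketch of prompt construction for Gorilla-style inference.
# ASSUMPTION: the "###USER:/###ASSISTANT:" turn format follows the Gorilla
# inference scripts; check the official repository for the exact template.

def build_gorilla_prompt(query: str) -> str:
    """Wrap a natural-language request in the single-turn chat template."""
    return f"###USER: {query}\n###ASSISTANT: "

prompt = build_gorilla_prompt(
    "I want to classify images of flowers into species."
)
print(prompt)
```

At inference time this prompt would be tokenized and passed to the merged (base + delta) model, e.g. via `transformers.AutoModelForCausalLM` and its `generate` method.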
Key Capabilities
- API Invocation: Proficiently invokes over 1,600 APIs, with a specific focus on TensorFlow Hub APIs.
- Reduced Hallucination: Designed to minimize errors and hallucinations when generating API calls.
- Natural Language to API: Converts natural language instructions into executable API code.
- Retriever-Aware Training: Trained with a retriever-aware pipeline in addition to standard fine-tuning, so it can incorporate retrieved API documentation supplied in the prompt at inference time.
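The retriever-aware mode above amounts to prompt augmentation: documentation for candidate APIs is retrieved and appended to the user query so the model grounds its call in real signatures. A minimal sketch, where the "Use this API documentation for reference" phrasing is an assumption based on the Gorilla paper's description and may differ in the released inference code:

```python
# Hedged sketch of retriever-aware prompting: the retrieved API doc is
# appended to the query before the prompt is sent to the model.
# ASSUMPTION: the reference clause wording follows the Gorilla paper's
# description, not necessarily the released code.

def build_retriever_aware_prompt(query: str, retrieved_doc: str) -> str:
    return (
        f"{query}\n"
        f"Use this API documentation for reference: {retrieved_doc}"
    )

doc = (
    "hub.KerasLayer('https://tfhub.dev/google/imagenet/"
    "mobilenet_v2_100_224/classification/5')"
)
prompt = build_retriever_aware_prompt("Classify an image of a dog.", doc)
print(prompt)
```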
Good For
- Tool Use in LLMs: Ideal for applications requiring LLMs to interact with external tools and services via APIs.
- Code Generation (APIs): Generating accurate API calls based on user prompts.
- Expanding LLM Functionality: Enabling LLMs to perform tasks beyond their inherent knowledge by leveraging external APIs. The project also maintains APIBench, a large benchmark of APIs (including TensorFlow Hub) used for both training and evaluation.