MirrorAPI-Cache: API Simulation and Response Generation
stabletoolbench/MirrorAPI-Cache is a 7.6-billion-parameter model fine-tuned from the StableToolBench-MirrorAPI base to act as an intelligent API server. Its core capability is interpreting API documentation and generating precise, JSON-formatted API responses for specific input requests. The model is trained on dedicated datasets, `train_cache.json` and `test_cache.json`, which focus on API interaction patterns.
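A request to the model is typically assembled as a chat-style prompt: a system prompt selecting the mode, followed by the API documentation and the concrete request. The sketch below shows one plausible way to do this; the system-prompt wording and message layout are illustrative assumptions, not the exact prompts shipped with the model.

```python
import json

# Assumed wording -- the real prompts live in the model repo.
SFT_SYSTEM_PROMPT = (
    "You are an API server. Given the API documentation and a request, "
    "return a JSON response."
)
COT_SYSTEM_PROMPT = (
    "You are an API server. First infer the mechanism of the API, "
    "then return a JSON response."
)

def build_messages(api_doc: str, request: dict, cot: bool = False) -> list:
    """Assemble a chat prompt: mode-specific system prompt + doc + request."""
    system = COT_SYSTEM_PROMPT if cot else SFT_SYSTEM_PROMPT
    user = f"API documentation:\n{api_doc}\n\nRequest:\n{json.dumps(request)}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_messages(
    "GET /weather?city=<name> -> current weather for the city",
    {"city": "Paris"},
    cot=True,
)
```

The resulting `messages` list can then be fed to any chat-template-aware inference stack (e.g. `tokenizer.apply_chat_template` in `transformers`).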
Key Capabilities
- API Response Generation: Accurately crafts JSON responses that align with an API's intended functionality, even with varied input parameters.
- API Mechanism Inference: Supports a Chain of Thought (CoT) mode in which the model first infers the underlying mechanism of an API and then generates a response grounded in that inference.
- Flexible Prompting: Utilizes distinct system prompts for standard Supervised Fine-Tuning (SFT) mode and CoT mode, guiding the model's behavior.
- Structured Output: Ensures all responses adhere to a strict JSON schema, including fields for error and response content, plus an additional `mechanism_of_the_api` field in CoT mode.
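Because the output contract is a fixed JSON schema, it is easy to check model replies programmatically. Below is a minimal hand-rolled validator for the fields described above (`error`, `response`, and `mechanism_of_the_api` in CoT mode); it is a sketch based on this description, not an official schema from the model repo.

```python
import json

def validate_output(raw: str, cot: bool = False) -> dict:
    """Parse model output and verify the required schema fields are present."""
    data = json.loads(raw)
    required = {"error", "response"}
    if cot:
        # CoT mode adds the inferred-mechanism field.
        required.add("mechanism_of_the_api")
    missing = required - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return data

out = validate_output(
    '{"error": "", "response": {"temp_c": 18}, '
    '"mechanism_of_the_api": "looks up current weather for the city"}',
    cot=True,
)
```

A reply that omits a required field raises `ValueError`, which gives a cheap guardrail when the model is used inside an automated pipeline.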
Good For
- API Simulation: Ideal for developers needing to simulate API behavior for testing, development, or prototyping without a live backend.
- Automated API Documentation Testing: Can be used to validate API documentation by generating expected outputs.
- Tool-use and Agent Development: Provides a robust component for agents that need to interact with or understand various APIs.
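For the agent-development use case, the model slots in as a drop-in replacement for a live backend. The sketch below shows the agent-side wiring; the model call is replaced here by a canned stub so the example is self-contained, and in practice the stub would be an inference call on the assembled prompt.

```python
import json

def mirror_api_stub(api_doc: str, request: dict) -> str:
    # Stand-in for MirrorAPI-Cache inference: returns a schema-shaped
    # JSON string echoing the request (hypothetical behavior).
    return json.dumps({"error": "", "response": {"echo": request}})

def call_simulated_api(api_doc: str, request: dict) -> dict:
    """Agent-side wrapper: query the simulated API and parse its JSON reply."""
    raw = mirror_api_stub(api_doc, request)
    reply = json.loads(raw)
    if reply["error"]:
        # Surface simulated API errors the same way a real client would.
        raise RuntimeError(reply["error"])
    return reply["response"]

result = call_simulated_api(
    "POST /orders creates an order from item and quantity",
    {"item": "book", "qty": 2},
)
```

Keeping the wrapper's interface identical for the stub and the real model makes it trivial to swap a live backend in later without touching agent code.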