MirrorAPI Overview
MirrorAPI is a 7.6-billion-parameter language model developed by stabletoolbench, fine-tuned from Qwen2.5-7B-Instruct. Its core purpose is to simulate an API server: given API documentation and a specific input request, it generates accurate, structured JSON responses. The model is particularly adept at understanding API functionality and producing meaningful output even when the input parameters are incorrect.
Key Capabilities
- API Response Generation: Generates JSON-formatted responses that align with an API's expected output, given its documentation and input.
- SFT and CoT Modes: Supports two operational modes:
  - SFT (Supervised Fine-Tuning) Mode: Provides direct, structured JSON responses for API calls.
  - CoT (Chain-of-Thought) Mode: Prepend [CHAIN_OF_THOUGHT] to the system prompt so the model first infers the underlying mechanism of the API before generating a response, offering deeper insight into its functionality.
- Robust Input Handling: Designed to generate informative and relevant responses even when API input parameters are incorrect, explaining expected behavior.
- Structured Output: Ensures all responses are valid, parsable JSON objects adhering to a predefined schema with error and response fields (plus mechanism_of_the_api in CoT mode).
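As a minimal sketch of consuming this schema, the following Python snippet parses a model completion and checks for the expected fields. The field names (error, response, mechanism_of_the_api) follow the description above; the sample payloads are invented for illustration, not taken from the MirrorAPI datasets.

```python
import json

def parse_mirrorapi_output(raw: str, cot: bool = False) -> dict:
    """Parse a MirrorAPI completion and verify the expected schema.

    In CoT mode the model is expected to additionally emit a
    "mechanism_of_the_api" field describing its inferred behavior.
    """
    obj = json.loads(raw)  # raises ValueError if the output is not valid JSON
    required = {"error", "response"}
    if cot:
        required.add("mechanism_of_the_api")
    missing = required - obj.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return obj

# Hypothetical SFT-mode completion: an empty "error" signals success.
sft_raw = '{"error": "", "response": {"temperature": 21.5, "unit": "C"}}'
parsed = parse_mirrorapi_output(sft_raw)

# Hypothetical CoT-mode completion with the extra mechanism field.
cot_raw = (
    '{"error": "", "response": {"ok": true},'
    ' "mechanism_of_the_api": "Returns cached weather data for the city."}'
)
cot_parsed = parse_mirrorapi_output(cot_raw, cot=True)
```

Validating the schema up front makes downstream tooling fail fast when a completion drifts from the expected structure.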
Training and Usage
MirrorAPI was trained on custom datasets including train_sft.json, train_cot.json, and train_augment.json. Testing data is available for both SFT and CoT modes. Users can integrate and test MirrorAPI using LLaMA-Factory, with specific instructions provided for data preparation and quickstart scripts. The model requires distinct system prompts to activate SFT or CoT behavior, along with a user prompt containing API documentation and request details in JSON format.
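The prompt layout described above can be sketched as follows. The system-prompt wording here is illustrative (only the [CHAIN_OF_THOUGHT] prefix is documented); the exact prompts used in training come from the MirrorAPI datasets.

```python
import json

# Illustrative system prompts -- assumed wording, not the official training
# prompts. Only the [CHAIN_OF_THOUGHT] prefix for CoT mode is documented.
SFT_SYSTEM = "You are an API server. Respond with a JSON object."
COT_SYSTEM = "[CHAIN_OF_THOUGHT] " + SFT_SYSTEM

def build_messages(api_doc: dict, request: dict, cot: bool = False) -> list:
    """Assemble chat messages: the system prompt selects SFT vs CoT mode,
    and the user turn carries the API documentation and request as JSON."""
    user = json.dumps({"api_documentation": api_doc, "request": request})
    return [
        {"role": "system", "content": COT_SYSTEM if cot else SFT_SYSTEM},
        {"role": "user", "content": user},
    ]

# Hypothetical API documentation and request for demonstration.
msgs = build_messages(
    {"name": "get_weather", "parameters": {"city": "string"}},
    {"city": "Paris"},
    cot=True,
)
```

The resulting message list can be fed to any chat-style inference stack (e.g. via LLaMA-Factory's chat interface) after applying the model's chat template.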