Nexusflow/NexusRaven-V2-13B

Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4K · Published: Dec 4, 2023 · License: other · Architecture: Transformer

Nexusflow/NexusRaven-V2-13B is a 13 billion parameter open-source language model developed by Nexusflow, optimized specifically for zero-shot function calling. It excels at generating single, nested, and parallel function calls, surpassing GPT-4's success rates on complex human-generated use cases. Designed for commercial viability, it was trained without data from proprietary LLMs and can also produce detailed explanations for the function calls it generates.


NexusRaven-V2-13B: Advanced Function Calling LLM

NexusRaven-V2-13B, developed by Nexusflow, is a 13 billion parameter open-source model engineered to excel at zero-shot function calling. It outperforms GPT-4 by 7% in function-calling success rate on human-generated use cases involving nested and composite functions. A key feature is its ability to generalize to unseen functions: the functions used for evaluation were held out of training.

Key Capabilities

  • Versatile Function Calling: Generates single, nested, and parallel function calls, even in challenging scenarios.
  • Explainable AI: Can produce detailed explanations for its generated function calls, a feature that can be toggled off to save inference tokens.
  • Commercially Permissive: Trained exclusively on commercially permissive data, with no dependence on outputs from proprietary LLMs, so it can be deployed freely in commercial applications.
  • Flexible Prompting: Accepts custom Python function signatures and docstrings; for best results, document the arguments of every function and sample at low temperature.
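The prompting style described above can be sketched as follows. This is a minimal illustration, not the model's official template: the `Function:`/`User Query:`/`<human_end>` layout mirrors the plain-text style NexusRaven-V2 documents, but `get_weather` and `build_prompt` are hypothetical names, and you should check the model card's exact template before deploying.

```python
import inspect
import textwrap

def get_weather(city: str, unit: str = "celsius") -> dict:
    """Return the current weather for a city.

    Args:
        city: Name of the city.
        unit: Temperature unit, "celsius" or "fahrenheit".
    """

def build_prompt(functions, query: str) -> str:
    """Render Python callables as a zero-shot function-calling prompt.

    Each tool appears as its signature plus full docstring (the model
    relies on documented arguments), followed by the user query.
    """
    blocks = []
    for fn in functions:
        sig = f"def {fn.__name__}{inspect.signature(fn)}:"
        doc = textwrap.indent(f'"""{inspect.getdoc(fn)}\n"""', "    ")
        blocks.append(f"Function:\n{sig}\n{doc}\n")
    return "\n".join(blocks) + f"\nUser Query: {query}<human_end>"

prompt = build_prompt([get_weather], "What's the weather in Paris in Fahrenheit?")
```

Because the signatures and docstrings are extracted with `inspect`, the same Python functions you intend to call can be reused verbatim as the prompt's tool definitions.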

Good For

  • Developers requiring robust and accurate function calling capabilities for integrating LLMs with external tools and APIs.
  • Applications needing explainable function calls for transparency and debugging.
  • Commercial projects seeking an open-source, high-performance function calling model without licensing restrictions from proprietary LLMs.
  • Use cases involving complex function interactions, such as deeply nested or parallel API calls.
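Since the model emits its result as a Python call expression, the nested calls mentioned above can be executed safely against a whitelist of registered tools. A hedged sketch, using the standard `ast` module instead of `eval` so that only known function names and literal arguments are ever run; the `registry` entries are hypothetical stand-ins for real APIs, and parallel calls would first be split on whatever separator the model's output uses:

```python
import ast

def dispatch(call_str: str, registry: dict):
    """Execute one (possibly nested) call expression against a
    whitelist of tools, rejecting anything that is not a registered
    function name or a literal argument."""
    def run(node):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            fn = registry[node.func.id]  # KeyError on unregistered tools
            args = [run(a) for a in node.args]
            kwargs = {kw.arg: run(kw.value) for kw in node.keywords}
            return fn(*args, **kwargs)
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError(f"Disallowed expression: {ast.dump(node)}")
    return run(ast.parse(call_str, mode="eval").body)

# Hypothetical tools standing in for real APIs.
registry = {"add": lambda a, b: a + b, "double": lambda x: 2 * x}
result = dispatch("double(add(2, 3))", registry)  # nested call -> 10
```

Walking the AST rather than calling `eval` matters here: model output is untrusted input, and this dispatcher raises on any expression that is not a registered call or a constant.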