rizerphe/CodeLlama-function-calling-6320-7b-Instruct-hf

Text generation · Model size: 7B · Quantization: FP8 · Context length: 4K · License: llama2 · Architecture: Transformer · Open weights

rizerphe/CodeLlama-function-calling-6320-7b-Instruct-hf is a 7 billion parameter, CodeLlama-based, instruction-tuned language model developed by rizerphe. It is fine-tuned specifically for function calling, using the glaive-function-calling-v2 dataset together with sharegpt-hyperfiltered-3k. The model generates function calls from user prompts and also handles general chat, making it suitable for applications that require structured tool use alongside conversational AI.


Model Overview

rizerphe/CodeLlama-function-calling-6320-7b-Instruct-hf is a 7 billion parameter instruction-tuned model built upon the CodeLlama architecture. It has been fine-tuned using LoRA on a curated dataset comprising a fraction of the glaive-function-calling-v2 dataset for function calling examples and a cleaned version of sharegpt-hyperfiltered-3k for general chat. This specialized training enables the model to effectively interpret user requests and generate appropriate function calls, while also maintaining conversational abilities.

Key Capabilities

  • Function Calling: The model is specifically optimized to understand user intent and generate structured function calls, including parameters, based on available function definitions. It can correctly identify when a function call is needed and format it appropriately.
  • Instruction Following: It demonstrates strong instruction-following capabilities, responding to direct questions and engaging in multi-turn conversations.
  • Contextual Understanding: The model can maintain context across turns in a conversation, even when functions are involved, allowing for more natural and effective interactions.
  • Hybrid Use Cases: It seamlessly transitions between generating function calls and providing natural language responses, depending on the prompt and available tools.
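To make a function call, the model needs the available function definitions embedded in its prompt. The sketch below assembles a Llama-2-style instruction prompt with a JSON function schema in the system block; the function name and the exact template are illustrative assumptions — the format this model actually expects is defined in its repository.

```python
import json

# Hypothetical function definition (name and schema are illustrative,
# not taken from the model card)
get_weather = {
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def build_prompt(functions, user_message):
    """Assemble a Llama-2-style chat prompt with the available function
    definitions in the system block. This is a sketch: the precise
    template the model was trained on is documented in its repository."""
    system = "You have access to the following functions:\n" + "\n".join(
        json.dumps(f, indent=2) for f in functions
    )
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_message} [/INST]"

prompt = build_prompt([get_weather], "What's the weather in Kyiv?")
print(prompt)
```

The resulting string would be passed to the model (e.g. via a `transformers` text-generation pipeline); the model then either answers in natural language or emits a structured function call.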

Good For

  • Tool-augmented LLM applications: Ideal for developers building agents or applications that require an LLM to interact with external tools or APIs through function calls.
  • Automated task execution: Suitable for scenarios where user requests need to be translated into executable actions via predefined functions.
  • Conversational AI with structured outputs: Useful for chatbots or virtual assistants that need to both converse naturally and trigger specific actions or retrieve structured information.
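In the tool-augmented setups described above, the application must parse the model's output and route it to a real function. A minimal dispatch sketch follows; the `{"name": ..., "arguments": {...}}` call format and the function name are illustrative assumptions, not the model's documented output format.

```python
import json

# Registry of callable tools (illustrative; a real application defines its own)
def get_current_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub standing in for a real API call

TOOLS = {"get_current_weather": get_current_weather}

def dispatch(model_output: str):
    """Parse a JSON function call emitted by the model and invoke the
    matching Python function. Assumes the model emits an object like
    {"name": ..., "arguments": {...}}; check the model's repository
    for its actual output format."""
    call = json.loads(model_output)
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise ValueError(f"Unknown function: {call['name']}")
    return fn(**call["arguments"])

result = dispatch('{"name": "get_current_weather", "arguments": {"city": "Kyiv"}}')
print(result)  # Sunny in Kyiv
```

Guarding the lookup with an explicit registry (rather than `eval` or `getattr` on a module) keeps the model from invoking anything the application did not deliberately expose.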