ermiaazarkhalili/Qwen2.5-14B-Instruct_Function_Calling_xLAM

Text Generation · Model size: 14.8B · Quantization: FP8 · Context length: 32k · Published: Aug 1, 2025 · License: apache-2.0 · Architecture: Transformer (open weights) · Concurrency cost: 1

The ermiaazarkhalili/Qwen2.5-14B-Instruct_Function_Calling_xLAM model is a 14.8-billion-parameter language model fine-tuned from Qwen/Qwen2.5-14B-Instruct. It is optimized for function calling, trained on the Salesforce/xlam-function-calling-60k dataset with Supervised Fine-Tuning (SFT) and LoRA adapters at a sequence length of 2,048 tokens. It is designed for applications that require robust tool use and function invocation.
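As a quick illustration, here is a minimal inference sketch using the transformers library. It assumes the model inherits the Qwen2.5 chat template's tool support; the `get_weather` schema is a hypothetical example tool, not something defined by this model card.

```python
# Minimal function-calling sketch (assumes Qwen2.5-style tool support in the chat template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ermiaazarkhalili/Qwen2.5-14B-Instruct_Function_Calling_xLAM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Hypothetical tool schema, written in the JSON-schema style chat templates expect.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Berlin?"}]
inputs = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Print only the newly generated tokens, which should contain the function call.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```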

Overview

ermiaazarkhalili/Qwen2.5-14B-Instruct_Function_Calling_xLAM is derived from the Qwen2.5-14B-Instruct base model and fine-tuned for function calling using Supervised Fine-Tuning (SFT) with LoRA adapters on the Salesforce/xlam-function-calling-60k dataset.

Key Capabilities

  • Function Calling Optimization: Specialized training for understanding and generating function calls.
  • Efficient Fine-Tuning: Uses LoRA (Low-Rank Adaptation) with 4-bit quantization for memory-efficient training (see the configuration sketch after this list).
  • Base Model: Built upon the robust Qwen2.5-14B-Instruct architecture.
  • Inference Flexibility: Available in multiple formats, including GGUF quantizations for CPU and mixed CPU/GPU inference (see the llama-cpp sketch below).
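
The following is a sketch of the QLoRA-style setup described above (4-bit quantization plus LoRA adapters), using transformers, bitsandbytes, and peft. The hyperparameters (rank, alpha, target modules) are illustrative assumptions, not the author's actual training configuration.

```python
# QLoRA-style setup sketch: 4-bit base model + LoRA adapters (hyperparameters are guesses).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                      # illustrative rank, not the card's actual value
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of the 14.8B weights train
```

For the GGUF route mentioned in the last bullet, a minimal local-inference sketch with llama-cpp-python might look like this; the filename is a placeholder for whichever quantization level the repository actually provides.

```python
# Local GGUF inference sketch via llama-cpp-python (filename is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-14B-Instruct_Function_Calling_xLAM.Q4_K_M.gguf",
    n_ctx=2048,        # matches the fine-tuning sequence length
    n_gpu_layers=-1,   # offload all layers to GPU; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Call a function to get the weather in Berlin."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```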

Good For

  • Research: Exploring language model fine-tuning techniques, especially for function calling.
  • Prototyping: Developing conversational AI agents that require tool use or function invocation.
  • Educational Purposes: Learning about SFT, LoRA, and function-calling model development.
  • Personal Projects: Implementing AI assistants with specific action capabilities.