ermiaazarkhalili/Qwen2.5-14B-Instruct_Function_Calling_xLAM
TEXT GENERATION
Concurrency Cost: 1 | Model Size: 14.8B | Quant: FP8 | Ctx Length: 32k | Published: Aug 1, 2025 | License: apache-2.0 | Architecture: Transformer | Open Weights
The ermiaazarkhalili/Qwen2.5-14B-Instruct_Function_Calling_xLAM model is a 14.8-billion-parameter language model fine-tuned from Qwen/Qwen2.5-14B-Instruct. It is optimized for function-calling tasks, having been trained on the Salesforce/xlam-function-calling-60k dataset via Supervised Fine-Tuning (SFT) with LoRA adapters. The model targets applications that require reliable tool use and function invocation; note that while the base model supports a 32k context window, fine-tuning was performed with a context length of 2,048 tokens.
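As a rough sketch of how such a model is used in a tool-use loop: the application passes a tool schema to the model and parses the structured call the model emits. The exact output format of this particular checkpoint is an assumption here; xLAM-style training data represents tool calls as JSON objects with `name` and `arguments` fields, and the `get_weather` tool below is purely hypothetical.

```python
import json

# Hypothetical tool schema, in the JSON-schema style used by the
# xlam-function-calling-60k dataset. The tool itself is illustrative only.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

def parse_tool_call(model_output: str):
    """Parse a JSON tool call emitted by the model.

    Assumes the model emits a single JSON object of the form
    {"name": ..., "arguments": {...}} — the format used in
    xLAM-style function-calling data.
    """
    call = json.loads(model_output)
    return call["name"], call.get("arguments", {})

# Simulated model completion for a prompt like "What's the weather in Paris?"
# (a real application would obtain this string from the model's generation).
simulated_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
name, args = parse_tool_call(simulated_output)
print(name, args)
```

In a real deployment, the tool schema and user query would be formatted with the model's chat template (e.g. via `tokenizer.apply_chat_template`), the model's completion would replace `simulated_output`, and the parsed call would be dispatched to the actual function before returning its result to the model.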