yemmygold/Qwen2.5-3B-Instruct_Function_Calling_xLAM

Text generation · Concurrency cost: 1 · Model size: 3.1B · Quant: BF16 · Context length: 32k · Published: Apr 24, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

yemmygold/Qwen2.5-3B-Instruct_Function_Calling_xLAM is a 3.1-billion-parameter instruction-tuned causal language model by ermiaazarkhalili, fine-tuned from Qwen/Qwen2.5-3B-Instruct. It was trained with Supervised Fine-Tuning (SFT) using LoRA adapters on the Salesforce/xlam-function-calling-60k dataset and specializes in function calling tasks. It is designed for efficient inference and for research into fine-tuning language models for specific instruction-following capabilities.


Overview

yemmygold/Qwen2.5-3B-Instruct_Function_Calling_xLAM is a 3.1 billion parameter language model developed by ermiaazarkhalili, fine-tuned from the Qwen/Qwen2.5-3B-Instruct base model. It leverages Supervised Fine-Tuning (SFT) with LoRA (Low-Rank Adaptation) using 4-bit quantization to specialize in function calling tasks.
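The training setup described above (SFT with LoRA adapters on a 4-bit NF4-quantized base model) can be sketched with `peft` and `bitsandbytes`. This is a minimal illustration only; the LoRA hyperparameters (rank, alpha, dropout, target modules) are assumed values for demonstration, not the settings actually used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE = "Qwen/Qwen2.5-3B-Instruct"

# Load the base model with 4-bit NF4 quantization (QLoRA-style setup).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; these hyperparameters are illustrative guesses.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Because only the low-rank adapter matrices receive gradients and the frozen base weights are stored in 4 bits, this setup fits on a single consumer GPU.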

Key Capabilities

  • Function Calling Optimization: Specifically trained on the Salesforce/xlam-function-calling-60k dataset to enhance its ability to understand and generate function calls.
  • Efficient Fine-Tuning: Utilizes LoRA with 4-bit NF4 quantization, making the training process efficient and resource-friendly.
  • Instruction Following: Benefits from SFT to improve adherence to instructions, particularly for structured outputs related to function calling.
  • Flexible Deployment: Available in multiple formats, including GGUF quantizations, for CPU or mixed CPU/GPU inference, and supports 4-bit quantized inference for reduced memory footprint.
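The 4-bit quantized inference path mentioned above can be exercised through `transformers` with a `BitsAndBytesConfig`. The following is a sketch assuming a standard chat-template workflow; it requires downloading the model weights and a CUDA-capable GPU.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL = "yemmygold/Qwen2.5-3B-Instruct_Function_Calling_xLAM"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)

# Ask the model to emit a function call for a simple query.
messages = [
    {"role": "system", "content": "You are a function-calling assistant."},
    {"role": "user", "content": "What's the weather in Paris right now?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```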

Good For

  • Research: Ideal for studying language model fine-tuning techniques and their impact on specific tasks like function calling.
  • Educational Purposes: Suitable for learning about SFT, LoRA, and function calling implementations in LLMs.
  • Prototyping: Useful for developing and testing conversational AI agents that require robust function calling capabilities.
  • Personal Projects: Can be integrated into personal applications where function calling is a core requirement.
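When prototyping agents on top of this model, the raw generation still has to be turned into executable calls. The xLAM training data represents answers as a JSON list of objects with `name` and `arguments` keys, so a tolerant parser along these lines can extract them from decoded output. This helper is a hypothetical sketch, not part of the model's official tooling.

```python
import json


def parse_tool_calls(text: str) -> list[dict]:
    """Extract a JSON list of function calls from model output.

    Tolerates surrounding prose by slicing from the first '[' to the
    last ']' before attempting to parse. Returns [] on any failure.
    """
    start = text.find("[")
    end = text.rfind("]")
    if start == -1 or end < start:
        return []
    try:
        calls = json.loads(text[start:end + 1])
    except json.JSONDecodeError:
        return []
    if not isinstance(calls, list):
        return []
    # Keep only well-formed call objects that name a function.
    return [c for c in calls if isinstance(c, dict) and "name" in c]


raw = 'Sure: [{"name": "get_weather", "arguments": {"city": "Paris"}}]'
for call in parse_tool_calls(raw):
    print(call["name"], call.get("arguments"))
```

Each returned dict can then be dispatched to a registry of real functions keyed by `name`, with `arguments` passed as keyword arguments.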