Varadrajan/ITDR-SFT-Qwen2.5-3B-v1
Text Generation · Model Size: 3.1B · Quantization: BF16 · Context Length: 32k · Architecture: Transformer · Published: Feb 28, 2026

Varadrajan/ITDR-SFT-Qwen2.5-3B-v1 is a 3.1-billion-parameter instruction-tuned causal language model based on the Qwen2.5 architecture. The model was fine-tuned with Unsloth for accelerated training and converted to the GGUF format, making it suitable for efficient local deployment and inference. It is optimized for use with llama.cpp and Ollama, with model files available at multiple quantization levels for straightforward integration into local inference setups.
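
As a rough illustration of running the GGUF files locally, the sketch below uses llama-cpp-python (one of several llama.cpp bindings). The GGUF file name and quantization level shown are placeholders rather than names taken from this repository; substitute the file you actually download.

```python
# Minimal sketch: local inference on a GGUF build of this model with llama-cpp-python.
# The model_path below is a hypothetical file name -- replace it with the GGUF file
# (and quantization level) you obtained from the repository.
from llama_cpp import Llama

llm = Llama(
    model_path="./ITDR-SFT-Qwen2.5-3B-v1-Q4_K_M.gguf",  # placeholder file name
    n_ctx=32768,      # matches the advertised 32k context length
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize the Qwen2.5 architecture in one sentence."}
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```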
