Prithwiraj731/SupplyChain-Qwen

Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Apr 7, 2026 · Architecture: Transformer

Prithwiraj731/SupplyChain-Qwen is a 0.5 billion parameter instruction-tuned Qwen2.5 model, converted to GGUF format. This model was fine-tuned using Unsloth, indicating optimizations for faster training and deployment. It is designed for text-only applications and can be deployed with tools like llama-cli or Ollama, making it suitable for supply chain-related natural language processing tasks.


SupplyChain-Qwen: GGUF Model

This model, developed by Prithwiraj731, is a 0.5 billion parameter Qwen2.5 variant, fine-tuned and converted into the GGUF format. The fine-tuning process leveraged Unsloth, a framework that accelerates fine-tuning and reduces memory use during training, which suggests the model was trained efficiently on modest hardware.

Key Capabilities

  • Optimized Training: Fine-tuned with Unsloth, indicating potential for faster training and efficient resource utilization.
  • GGUF Format: Provided in GGUF format, making it compatible with various inference engines like llama-cli and Ollama for local deployment.
  • Instruction-Tuned: Designed to follow instructions, suitable for a range of natural language processing tasks.
  • Lightweight: With 0.5 billion parameters, it offers a smaller footprint for deployment compared to larger models.

Deployment and Usage

An Ollama Modelfile is included for straightforward deployment, simplifying the process for users. The GGUF file can be run with llama-cli for text-only inference; llama.cpp also ships llama-mtmd-cli for multimodal models, but this is a text-only instruct model, so llama-cli is the appropriate tool. The model is packaged as qwen2.5-0.5b-instruct.F16.gguf.
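As a rough sketch of what such a Modelfile could contain (the template and parameters below are assumptions based on Qwen2.5's standard ChatML-style chat format, not the contents of the bundled Modelfile):

```
# Hypothetical Modelfile for running the GGUF with Ollama.
# Points at the packaged weights in the current directory.
FROM ./qwen2.5-0.5b-instruct.F16.gguf

# Sampling defaults (assumed values, adjust to taste).
PARAMETER temperature 0.7

# Qwen2.5 instruct models use ChatML-style role markers.
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
```

With a Modelfile like this, the model could be registered and run with `ollama create supplychain-qwen -f Modelfile` followed by `ollama run supplychain-qwen`, or used directly with llama.cpp via `llama-cli -m qwen2.5-0.5b-instruct.F16.gguf -cnv` for an interactive chat session.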