yuvraaj23/citynexus-planner-qwen2.5-0.5b

Text generation · Concurrency cost: 1 · Model size: 0.5B · Quant: BF16 · Context length: 32k · Published: Apr 26, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

yuvraaj23/citynexus-planner-qwen2.5-0.5b is a 0.5 billion parameter, Qwen2.5-based, instruction-tuned causal language model developed by yuvraaj23. It was fine-tuned from unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit using Unsloth and Hugging Face's TRL library, which is reported to enable roughly 2x faster training. The model supports a 32,768-token context length and is aimed at tasks where efficient processing matters.


Model Overview

The yuvraaj23/citynexus-planner-qwen2.5-0.5b is a compact 0.5 billion parameter language model, fine-tuned by yuvraaj23. It is based on the Qwen2.5 architecture and was specifically instruction-tuned from the unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit base model.

Key Characteristics

  • Architecture: Qwen2.5-based, a causal language model.
  • Parameter Count: 0.5 billion parameters, making it suitable for resource-constrained environments or applications requiring faster inference.
  • Context Length: Supports a substantial context window of 32768 tokens, allowing it to process longer inputs and maintain conversational coherence over extended interactions.
  • Training Efficiency: The model was fine-tuned using Unsloth and Hugging Face's TRL library, a combination reported to train roughly 2x faster than standard fine-tuning methods.
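Qwen2.5 instruct models use the ChatML prompt format, so prompts for this fine-tune would normally be produced via the tokenizer's chat template. The sketch below builds an equivalent prompt by hand to show the format; the system message text is an illustrative placeholder, not something taken from this model card.

```python
def build_chatml_prompt(messages):
    """Format a list of {role, content} dicts into ChatML, the prompt
    format used by Qwen2.5 instruct models, with <|im_start|>/<|im_end|>
    delimiters. A trailing assistant header cues the model to respond."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # generation prompt
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the benefits of compact LLMs."},
])
print(prompt)
```

In practice, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` from the transformers library produces this string (plus any template details specific to the checkpoint), so the manual version is only for illustration.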

Use Cases

This model is well-suited for applications where a balance between performance and computational efficiency is crucial. Its instruction-tuned nature and large context window make it effective for:

  • Text generation: Creating coherent and contextually relevant text.
  • Instruction following: Responding to specific prompts and commands.
  • Summarization: Condensing longer texts while retaining key information.
  • Conversational AI: Engaging in extended dialogues due to its 32768 token context length.
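For summarization of documents that exceed even the 32,768-token window, a common workaround is to split the input into overlapping chunks and summarize each one. A minimal sketch, assuming a crude 4-characters-per-token heuristic (the ratio is an assumption, not a tokenizer measurement; for exact budgeting, count tokens with the model's tokenizer):

```python
def chunk_for_context(text, max_tokens=32768, overlap_tokens=256, chars_per_token=4):
    """Split text into overlapping chunks that should each fit within
    the model's context window, using a rough chars-per-token estimate.
    Consecutive chunks share ~overlap_tokens worth of text so that
    sentences cut at a boundary appear whole in at least one chunk."""
    max_chars = max_tokens * chars_per_token
    step = (max_tokens - overlap_tokens) * chars_per_token
    if len(text) <= max_chars:
        return [text]
    return [text[i:i + max_chars] for i in range(0, len(text), step)]

# A ~300k-character document splits into three overlapping chunks.
chunks = chunk_for_context("x" * 300_000)
print(len(chunks), [len(c) for c in chunks])
```

Each chunk's summary can then be concatenated and summarized again (map-reduce summarization), keeping every individual model call inside the context budget.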