aki-008/Zindi_RAC-Qwen2.5-1.5B-Instruct-Think-16-bit

Text generation · Concurrency cost: 1 · Model size: 1.5B · Quantization: BF16 · Context length: 32K · Published: Dec 25, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

aki-008/Zindi_RAC-Qwen2.5-1.5B-Instruct-Think-16-bit is a 1.5-billion-parameter instruction-tuned causal language model developed by aki-008, fine-tuned from unsloth/Qwen2.5-1.5B-Instruct. The model was trained with Unsloth and Hugging Face's TRL library, which the developer reports enabled 2x faster fine-tuning. It inherits the base model's 32,768-token context length, making it suitable for tasks requiring extensive contextual understanding.


Model Overview

aki-008/Zindi_RAC-Qwen2.5-1.5B-Instruct-Think-16-bit is a 1.5-billion-parameter instruction-tuned language model developed by aki-008. It is fine-tuned from the unsloth/Qwen2.5-1.5B-Instruct base model, using the Unsloth library together with Hugging Face's TRL for efficient training.
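
Since the weights are published in BF16, the model can be loaded with the standard transformers API. The following is a minimal sketch, assuming the repository is publicly accessible on the Hugging Face Hub and ships with the usual Qwen2.5 chat template; torch and transformers are the only dependencies:

```python
# Minimal loading sketch -- assumes the repo is public on the Hugging Face Hub
# and that `torch` and `transformers` are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aki-008/Zindi_RAC-Qwen2.5-1.5B-Instruct-Think-16-bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 weights listed above
    device_map="auto",           # place layers on GPU(s) when available
)

# Qwen2.5-style instruction models expect chat-formatted input.
messages = [{"role": "user", "content": "Explain what a context window is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

With device_map="auto", transformers places the weights on a GPU when one is available and falls back to CPU otherwise.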

Key Characteristics

  • Base Model: Fine-tuned from unsloth/Qwen2.5-1.5B-Instruct.
  • Efficient Training: Trained with Unsloth and Hugging Face's TRL, which the developer reports gave a 2x fine-tuning speedup.
  • Context Length: Inherits the base model's 32,768-token context window, allowing it to process long inputs.
  • License: Distributed under the Apache-2.0 license, providing flexibility for various applications.

Potential Use Cases

This model is well-suited for applications requiring:

  • Long-context understanding: Its 32,768-token context window accommodates long documents, lengthy conversations, or sizable code files, as shown in the sketch after this list.
  • Instruction-following tasks: As an instruction-tuned model, it can effectively respond to a wide range of prompts and commands.
  • Resource-efficient deployment: At 1.5 billion parameters (roughly 3 GB of weights in BF16), the model can run in environments with moderate computational resources.
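
To make the long-context use case concrete, below is a hedged summarization sketch that reuses the model and tokenizer from the loading example above. The report.txt path is purely illustrative, and the truncation limit keeps the prompt plus the generated summary within the 32,768-token window:

```python
# Long-document summarization sketch, reusing `model` and `tokenizer` from the
# loading example above.
def summarize(document: str, max_new_tokens: int = 512) -> str:
    messages = [
        {"role": "system", "content": "You are a concise summarization assistant."},
        {"role": "user", "content": f"Summarize the following document:\n\n{document}"},
    ]
    inputs = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        return_tensors="pt",
        truncation=True,
        max_length=32768 - max_new_tokens,  # leave headroom for the summary
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# Example usage with a hypothetical local file:
print(summarize(open("report.txt").read()))
```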