notshakti/wraith-boss-ai

Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Context Length: 32k · Published: Apr 26, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

notshakti/wraith-boss-ai is a 1.5-billion-parameter, Qwen2-based, instruction-tuned causal language model developed by notshakti. Fine-tuned from unsloth/Qwen2.5-1.5B-Instruct-bnb-4bit, it was trained with Unsloth for accelerated finetuning and is designed for general instruction-following tasks.


notshakti/wraith-boss-ai: An Efficient Qwen2 Instruction Model

Overview

notshakti/wraith-boss-ai is a 1.5-billion-parameter instruction-tuned language model developed by notshakti. It is built on the Qwen2 architecture and was fine-tuned from the unsloth/Qwen2.5-1.5B-Instruct-bnb-4bit base model. A notable aspect of its development is the use of Unsloth during training, which the authors report delivered a 2x speedup during finetuning.
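Qwen2-family instruction models are conventionally prompted in the ChatML format, and in practice the tokenizer's `apply_chat_template` method would render this automatically. As an illustration only (assuming this fine-tune keeps the base model's chat template; the helper name `to_chatml` is hypothetical), the expected prompt layout can be sketched as:

```python
def to_chatml(messages):
    """Render a list of {"role", "content"} dicts into the ChatML layout
    used by Qwen2-style instruction models."""
    parts = []
    for msg in messages:
        # Each turn is delimited by <|im_start|> / <|im_end|> markers.
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Open an assistant turn to cue the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize Unsloth in one sentence."},
])
```

When loading the model with a library such as Transformers, prefer the tokenizer's built-in chat template over hand-rolled formatting, since the template is shipped with the model and stays in sync with its training format.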

Key Capabilities

  • Instruction Following: Designed to accurately respond to a wide range of user instructions.
  • Efficient Training: Finetuned with Unsloth's optimizations for a reported 2x speedup; its small 1.5B parameter count also keeps deployment costs modest.
  • Qwen2 Architecture: Leverages the robust and performant Qwen2 base for strong language understanding and generation.

Should I use this for my use case?

This model is suitable for general-purpose instruction-following tasks where a smaller, efficiently trained model is preferred. Its 1.5 billion parameters and 32,768-token context length make it a good candidate for applications that need reasonable performance without the computational overhead of larger models. Consider this model if you want a capable instruction-tuned LLM that prioritizes efficient training and deployment, especially for tasks where the Qwen2 architecture has shown strong performance.
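When deciding whether the 32,768-token window is enough for a given workload, a rough pre-check can be done before involving the tokenizer. The sketch below uses an assumed heuristic of about 4 characters per token (real counts require the model's tokenizer, and the `fits_in_context` helper is hypothetical):

```python
CTX_LEN = 32_768        # context window stated on the model card
CHARS_PER_TOKEN = 4     # rough heuristic; exact counts need the tokenizer

def fits_in_context(text, reserve_for_output=512):
    """Estimate whether `text` plus a generation budget fits the window."""
    est_tokens = len(text) / CHARS_PER_TOKEN
    return est_tokens + reserve_for_output <= CTX_LEN

short_doc = "hello " * 1_000    # ~6k chars, roughly 1.5k tokens
long_doc = "hello " * 50_000    # ~300k chars, roughly 75k tokens
```

For borderline inputs, replace the heuristic with an actual token count from the model's tokenizer, since character-based estimates can be off by a wide margin for code or non-English text.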