acstener/BlazingCleanup-Qwen2.5-1.5B-FT-v1

  • Task: Text Generation
  • Concurrency Cost: 1
  • Model Size: 1.5B
  • Quant: BF16
  • Context Length: 32k
  • Published: Mar 24, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Weights: Open
  • Status: Warm

acstener/BlazingCleanup-Qwen2.5-1.5B-FT-v1 is a 1.5-billion-parameter Qwen2.5-based causal language model developed by acstener and fine-tuned from unsloth/Qwen2.5-1.5B-Instruct. It supports a 32,768-token context length and was trained with Unsloth and Hugging Face's TRL library, which the author reports enabled 2x faster training. The model targets tasks that call for a compact yet capable language model.
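
A quick way to try the model is through the Transformers library. The snippet below is a minimal sketch, not an official example from the author: the model ID comes from this card, while the prompt, dtype handling, and generation settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "acstener/BlazingCleanup-Qwen2.5-1.5B-FT-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Qwen2.5-Instruct derivatives inherit a chat template, so the prompt is
# formatted through the tokenizer rather than built by hand.
messages = [{"role": "user", "content": "Explain what a context window is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```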


Model Overview

acstener/BlazingCleanup-Qwen2.5-1.5B-FT-v1 is a 1.5-billion-parameter language model built on the Qwen2.5 architecture. Developed by acstener, it was fine-tuned from unsloth/Qwen2.5-1.5B-Instruct and supports a context length of 32,768 tokens.

Key Characteristics

  • Architecture: Based on the Qwen2.5 family, providing a robust foundation for language understanding and generation.
  • Parameter Count: At 1.5 billion parameters, it offers a balance between performance and computational efficiency.
  • Context Length: Features a 32,768-token context window, allowing it to process and generate longer sequences of text.
  • Training Efficiency: The model was trained using Unsloth and Hugging Face's TRL library, which facilitated a 2x faster training process compared to standard methods; a sketch of a comparable setup follows this list.
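
The card does not publish the training recipe, so the following is a hypothetical sketch of a typical Unsloth + TRL supervised fine-tuning setup for this base model. The dataset path, LoRA rank, and hyperparameters are assumptions for illustration only.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the stated base model at the card's 32k sequence length.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-1.5B-Instruct",
    max_seq_length=32768,
    load_in_4bit=False,  # the card lists BF16 weights
)

# Attach LoRA adapters; rank and alpha here are common defaults, not the
# author's published values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset with a "text" column; the real training data is
# not disclosed.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```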

Use Cases

This model is suitable for applications that require a compact yet capable language model with solid long-context understanding. Its efficient training methodology also suggests potential for rapid fine-tuning iteration and deployment across a range of NLP tasks.