acstener/BlazingCleanup-Qwen2.5-1.5B-FT-v1
Text Generation | Concurrency Cost: 1 | Model Size: 1.5B | Quant: BF16 | Ctx Length: 32k | Published: Mar 24, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights | Warm

acstener/BlazingCleanup-Qwen2.5-1.5B-FT-v1 is a 1.5 billion parameter Qwen2.5-based causal language model developed by acstener and fine-tuned from unsloth/Qwen2.5-1.5B-Instruct. The model supports a 32,768-token context length and was trained with Unsloth and Hugging Face's TRL library, a combination reported to enable up to 2x faster training. It targets workloads that need a compact but capable language model running efficiently on modest hardware.
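A minimal usage sketch, assuming the model is published on the Hugging Face Hub under this ID and loads through the standard transformers causal-LM API (the prompt text is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "acstener/BlazingCleanup-Qwen2.5-1.5B-FT-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",           # requires the accelerate package
)

# Qwen2.5-Instruct derivatives ship a chat template; apply it before generating.
messages = [{"role": "user", "content": "Summarize the key points of this text."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The 32k context length means long inputs can be passed in a single prompt, subject to available GPU memory.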
