parthbijpuriya/qwen2.5-finetuned-merged
Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Apr 8, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The parthbijpuriya/qwen2.5-finetuned-merged model is a 1.5 billion parameter Qwen2.5-based causal language model published by parthbijpuriya. It was finetuned with Unsloth and Hugging Face's TRL library, a combination advertised as enabling roughly 2x faster training, and the resulting adapter was merged back into the base weights. Its small parameter count makes it well suited to efficient deployment and low-latency inference.
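
Because the adapter is already merged into the weights, the model can be loaded like any standard causal LM. Below is a minimal sketch using the Hugging Face Transformers library; the prompt and generation parameters are illustrative, and `torch_dtype=torch.bfloat16` is an assumption based on the BF16 quantization listed above.

```python
# Minimal usage sketch for the merged checkpoint (assumes transformers and torch installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "parthbijpuriya/qwen2.5-finetuned-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quant listed on this card
    device_map="auto",           # place layers on available GPU(s)/CPU
)

# Qwen2.5 models ship a chat template; apply it to format the prompt correctly.
messages = [{"role": "user", "content": "Summarize what model merging means."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```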
