parthbijpuriya/qwen2.5-7b-finetuned-v2
Task: Text Generation
Concurrency Cost: 1
Model Size: 7.6B
Quant: FP8
Context Length: 32k
Published: Apr 10, 2026
License: apache-2.0
Architecture: Transformer
Open Weights

parthbijpuriya/qwen2.5-7b-finetuned-v2 is a 7.6-billion-parameter Qwen2.5 model developed by parthbijpuriya, fine-tuned from unsloth/qwen2.5-7b-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which enables roughly 2x faster training. The result is a general-purpose language model intended for text-generation tasks.
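
A minimal sketch of running the model for text generation with the Hugging Face Transformers library. It assumes the repository exposes standard Transformers-format weights under the model id above; because the base checkpoint is a bitsandbytes 4-bit Unsloth model, the bitsandbytes and accelerate packages may also be required. The prompt contents are illustrative.

```python
# Sketch: load the fine-tuned model and generate a short completion.
# Assumes standard Transformers-format weights; bitsandbytes/accelerate
# may be needed since the base model is a 4-bit Unsloth checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "parthbijpuriya/qwen2.5-7b-finetuned-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place weights on available GPU(s) or CPU
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

# Qwen2.5 models ship a chat template; build the prompt with it.
messages = [{"role": "user", "content": "Summarize what fine-tuning is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```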
