gjyotin305/Qwen2.5-7B-Instruct_old_sft_alpaca_009
Text Generation · Open Weights

Concurrency Cost: 1
Model Size: 7.6B
Quant: FP8
Ctx Length: 32k
Published: Jan 8, 2026
License: apache-2.0
Architecture: Transformer

gjyotin305/Qwen2.5-7B-Instruct_old_sft_alpaca_009 is a 7.6-billion-parameter instruction-tuned causal language model published by gjyotin305, built on Qwen2.5-7B-Instruct. It was fine-tuned with Unsloth and Hugging Face's TRL library, enabling up to 2x faster training. The underlying Qwen2.5 architecture supports a context length of up to 131,072 tokens (this listing serves a 32k context), making the model well suited to long-sequence processing and instruction-following tasks.
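A minimal inference sketch, assuming the model is hosted on the Hugging Face Hub under the repository id above and loadable with the standard transformers API; the prompt text is illustrative, and Qwen2.5-Instruct models expect their chat template to be applied before generation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gjyotin305/Qwen2.5-7B-Instruct_old_sft_alpaca_009"

# Load tokenizer and weights; device_map="auto" places layers across
# available devices (requires the accelerate package).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Build a chat-formatted prompt using the model's own chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the Alpaca instruction format in one sentence."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```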
