gjyotin305/Qwen2.5-7B-Instruct_old_sft_alpaca_003
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Jan 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

gjyotin305/Qwen2.5-7B-Instruct_old_sft_alpaca_003 is a 7.6-billion-parameter instruction-tuned causal language model, fine-tuned by gjyotin305 from Qwen2.5-7B-Instruct. It was trained with Unsloth and Hugging Face's TRL library for faster fine-tuning. The model is designed for general instruction-following tasks and supports a maximum context length of 131,072 tokens (32,768 natively; longer contexts via YaRN rope scaling) for processing extensive inputs.
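The card does not include a usage snippet, so here is a minimal sketch of loading the model with Hugging Face transformers. The model ID is taken from this page; the chat messages and generation settings are illustrative assumptions, not values from the model's training setup.

```python
# Minimal sketch: run the model with Hugging Face transformers.
# Model ID comes from this page; sampling settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gjyotin305/Qwen2.5-7B-Instruct_old_sft_alpaca_003"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick an appropriate dtype for the hardware
    device_map="auto",    # place weights on available GPU(s)/CPU
)

# Qwen2.5-style chat formatting via the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize instruction tuning in two sentences."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens before decoding the reply.
reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(reply)
```

Since the page lists an FP8 quant, a serving stack with FP8 support (e.g. vLLM on recent NVIDIA GPUs) may be a better fit for deployment than the plain transformers path above.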
