gjyotin305/Qwen2.5-7B-Instruct_old_sft_alpaca_005
Text Generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Jan 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

gjyotin305/Qwen2.5-7B-Instruct_old_sft_alpaca_005 is a 7.6-billion-parameter Qwen2.5-Instruct model fine-tuned by gjyotin305. It was trained with Unsloth and Hugging Face's TRL library, which the author reports made training roughly 2x faster. The model is designed for instruction-following tasks and inherits the base Qwen2.5 architecture, which supports a context length of up to 131,072 tokens.
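Because the model keeps the standard Qwen2.5-Instruct chat format, it can be loaded with the Hugging Face `transformers` library like any other causal LM. The snippet below is a minimal sketch, assuming the repository is publicly available on the Hub under this id and that `transformers` (plus `accelerate` for `device_map="auto"`) is installed:

```python
# Minimal usage sketch: load the fine-tuned checkpoint from the Hub and run
# a single chat-style generation. Assumes the repo is public and the standard
# transformers API applies; the prompt text is purely illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gjyotin305/Qwen2.5-7B-Instruct_old_sft_alpaca_005"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # requires accelerate; places weights on available devices
)

messages = [
    {"role": "user", "content": "Explain instruction tuning in two sentences."},
]
# Qwen2.5-Instruct models ship a chat template, so apply_chat_template
# builds the correctly formatted prompt, including the generation prompt.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```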
