nijumich/Qwen2.5-7B-Instruct-recipieNLG_V1-1ep-20260405-224407-ft-1gpu
- Task: Text generation
- Concurrency cost: 1
- Model size: 7.6B parameters
- Quantization: FP8
- Context length: 32k
- Published: Apr 6, 2026
- License: apache-2.0
- Architecture: Transformer (open weights)

This is a 7.6-billion-parameter Qwen2.5-7B-Instruct fine-tune published by nijumich. As the repository name suggests, it was fine-tuned for one epoch on a RecipeNLG-derived recipe dataset, using Unsloth together with Hugging Face's TRL library to speed up training. The model keeps the base model's 32,768-token context length, so it can handle long prompts and long generated outputs.
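
Because the checkpoint follows the standard Qwen2.5-Instruct chat format, it can be loaded with the Hugging Face `transformers` library. The sketch below is illustrative rather than an official usage snippet from the publisher; the prompt and generation settings are assumptions, and it presumes the repository id above is reachable on the Hugging Face Hub.

```python
# Minimal inference sketch (assumptions: repo id is on the HF Hub,
# a GPU or enough CPU RAM is available for a 7.6B model).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nijumich/Qwen2.5-7B-Instruct-recipieNLG_V1-1ep-20260405-224407-ft-1gpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # spread layers across available devices
)

# Qwen2.5-Instruct expects its chat template to be applied before generation.
messages = [
    {"role": "user",
     "content": "Suggest a simple dinner recipe using chicken and rice."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```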
