spar-project/Qwen2.5-7B-Instruct-layers-16-24
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Apr 1, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

spar-project/Qwen2.5-7B-Instruct-layers-16-24 is a 7.6-billion-parameter instruction-tuned language model developed by spar-project. It is fine-tuned from unsloth/Qwen2.5-7B-Instruct, using Unsloth together with Hugging Face's TRL library to speed up training. The model is intended for general instruction-following tasks.
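Below is a minimal usage sketch showing how a checkpoint like this is typically loaded and queried with the Hugging Face `transformers` library. The model card does not document a loading recipe, so the standard `AutoModelForCausalLM` path, the chat template call, and the example prompt are assumptions, not confirmed instructions.

```python
# Minimal sketch, assuming this checkpoint loads via the standard
# transformers AutoModel path (not confirmed by the model card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "spar-project/Qwen2.5-7B-Instruct-layers-16-24"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # place layers on available devices
)

# Qwen2.5-Instruct models ship a chat template; build a single-turn prompt.
messages = [{"role": "user", "content": "Summarize the benefits of instruction tuning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```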
