tinyllms/qwen2.5-7b-instruct-sft-game24-qlora
Text generation · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Concurrency cost: 1 · Architecture: Transformer · Published: Mar 15, 2026

tinyllms/qwen2.5-7b-instruct-sft-game24-qlora is a 7.6-billion-parameter instruction-tuned language model, fine-tuned from Qwen/Qwen2.5-7B-Instruct with QLoRA on the tinyllms/game24-trajectories dataset. This targeted fine-tuning specializes the model for one reasoning task: generating solutions to the Game24 arithmetic puzzle.
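Game24 asks for an arithmetic expression that combines four given numbers, each used exactly once, with +, -, *, and / to reach 24. Solutions generated by the model can be checked mechanically; the sketch below is a hypothetical brute-force solver/verifier (`solves_24` is an illustrative helper, not part of this model or dataset) that enumerates all orderings, operator choices, and parenthesizations:

```python
from itertools import permutations, product

def solves_24(numbers, target=24, eps=1e-6):
    """Return one expression over `numbers` evaluating to `target`, or None."""
    for a, b, c, d in permutations(numbers):
        for o1, o2, o3 in product("+-*/", repeat=3):
            # The five possible parenthesizations of four operands.
            for form in (
                "(({a} {o1} {b}) {o2} {c}) {o3} {d}",
                "({a} {o1} ({b} {o2} {c})) {o3} {d}",
                "({a} {o1} {b}) {o2} ({c} {o3} {d})",
                "{a} {o1} (({b} {o2} {c}) {o3} {d})",
                "{a} {o1} ({b} {o2} ({c} {o3} {d}))",
            ):
                expr = form.format(a=a, b=b, c=c, d=d, o1=o1, o2=o2, o3=o3)
                try:
                    # eval is safe here: expr contains only numbers, + - * / ( )
                    value = eval(expr)
                except ZeroDivisionError:
                    continue
                if abs(value - target) < eps:
                    return expr
    return None

print(solves_24([1, 2, 3, 4]))  # a valid expression exists (e.g. 1*2*3*4)
print(solves_24([1, 1, 1, 1]))  # unsolvable, returns None
```

Such a checker is how trajectory datasets for this task are typically filtered: only generations whose final expression verifies are kept as training targets.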
