tinyllms/qwen2.5-7b-instruct-sft-game24-qlora-16384
Task: Text Generation
Concurrency Cost: 1
Model Size: 7.6B
Quant: FP8
Ctx Length: 32k
Published: Mar 15, 2026
Architecture: Transformer
Status: Cold

tinyllms/qwen2.5-7b-instruct-sft-game24-qlora-16384 is a 7.6-billion-parameter instruction-tuned language model, fine-tuned from Qwen/Qwen2.5-7B-Instruct using QLoRA. It supports a 16384-token context length (as reflected in the model name) and is specifically optimized for the Game of 24, an arithmetic puzzle in which four given numbers must be combined with +, -, *, and / to produce exactly 24. Training focused on generating completions for Game24-style prompts, making the model suitable for structured reasoning and problem solving within this specialized domain.
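Because Game24 answers are arithmetic expressions, model completions are easy to check programmatically. The sketch below is an illustration of such a verifier and is not part of the model or its training pipeline; it confirms that an expression uses exactly the four given numbers and evaluates to 24.

```python
import ast
import operator

# Allowed binary operators for Game24 expressions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_expr(node):
    """Safely evaluate an AST restricted to +, -, *, / and numeric literals."""
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](eval_expr(node.left), eval_expr(node.right))
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("disallowed expression element")

def collect_numbers(node, out):
    """Gather every numeric literal appearing in the expression tree."""
    if isinstance(node, ast.Constant):
        out.append(node.value)
    for child in ast.iter_child_nodes(node):
        collect_numbers(child, out)

def is_valid_game24(expr, numbers):
    """True iff expr uses exactly `numbers` (as a multiset) and equals 24."""
    tree = ast.parse(expr, mode="eval").body
    used = []
    collect_numbers(tree, used)
    if sorted(used) != sorted(numbers):
        return False
    try:
        return abs(eval_expr(tree) - 24) < 1e-6
    except (ValueError, ZeroDivisionError):
        return False

print(is_valid_game24("(8 / (3 - 8 / 3))", [3, 3, 8, 8]))  # True: 8 / (1/3) = 24
print(is_valid_game24("4 * 6", [4, 6, 1, 1]))              # False: 1s unused
```

A checker like this can serve as an automatic reward or filtering signal when evaluating the model's completions on Game24 prompts.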
