parom23/qwen_chess_lora
Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Mar 7, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
The parom23/qwen_chess_lora model is a 0.5-billion-parameter instruction-tuned causal language model, fine-tuned from Qwen/Qwen2.5-0.5B-Instruct. It is adapted specifically for chess-related tasks and reaches a loss of 0.2985 on its evaluation set. Its compact size and specialized fine-tuning make it suitable for applications that need chess-specific understanding or generation within a 32768-token context window.
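As a sketch of how the model might be used, the snippet below trims a move history to fit the 32768-token context and then generates a continuation with the transformers library. The `trim_moves_to_budget` helper, the "Continue the game:" prompt wording, and the assumption that the checkpoint loads directly with `AutoModelForCausalLM` (rather than as a bare LoRA adapter via peft on the Qwen2.5-0.5B-Instruct base) are illustrative assumptions, not documented behavior.

```python
def trim_moves_to_budget(moves, token_budget, count_tokens):
    """Drop the oldest moves until the joined move text fits token_budget.

    count_tokens is a callable (str -> int) so a real tokenizer can be
    plugged in; a whitespace count works as a rough stand-in for testing.
    """
    moves = list(moves)
    while len(moves) > 1 and count_tokens(" ".join(moves)) > token_budget:
        moves.pop(0)  # discard the earliest move first
    return moves


if __name__ == "__main__":
    # Assumption: the published checkpoint is a merged model loadable
    # directly; if it ships as a bare LoRA adapter, loading the base model
    # and applying peft.PeftModel.from_pretrained would be needed instead.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("parom23/qwen_chess_lora")
    model = AutoModelForCausalLM.from_pretrained(
        "parom23/qwen_chess_lora", torch_dtype="bfloat16"
    )

    moves = trim_moves_to_budget(
        ["e4", "e5", "Nf3", "Nc6", "Bb5"],
        token_budget=32768,
        count_tokens=lambda s: len(tok(s)["input_ids"]),
    )
    prompt = tok.apply_chat_template(
        [{"role": "user", "content": "Continue the game: " + " ".join(moves)}],
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=16)
    print(tok.decode(out[0][inputs["input_ids"].shape[-1]:],
                     skip_special_tokens=True))
```

Keeping the token-budget check separate from the model code means the same trimming logic works with any tokenizer, or with a cheap word-count estimate when the tokenizer is not yet loaded.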