parom23/qwen_chess_lora

  • Task: Text Generation
  • Model Size: 0.5B parameters
  • Quantization: BF16
  • Context Length: 32k tokens
  • Published: Mar 7, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

The parom23/qwen_chess_lora model is a 0.5 billion parameter instruction-tuned causal language model, fine-tuned from Qwen/Qwen2.5-0.5B-Instruct. It is adapted specifically for chess-related tasks and reports an evaluation loss of 0.2985. Its compact size and specialized fine-tuning make it suitable for applications requiring chess-specific understanding or generation within a 32768-token context.


Model Overview

parom23/qwen_chess_lora is a specialized language model fine-tuned from Qwen/Qwen2.5-0.5B-Instruct. With 0.5 billion parameters and a context length of 32768 tokens, it is designed for a specific domain rather than general-purpose language tasks.

Key Characteristics

  • Base Model: Fine-tuned from Qwen/Qwen2.5-0.5B-Instruct.
  • Parameter Count: 0.5 billion parameters, making it a relatively compact model.
  • Context Length: Supports a substantial context window of 32768 tokens.
  • Performance: Achieved a loss of 0.2985 on its evaluation set, indicating effective fine-tuning for its intended domain.
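The model card does not include a loading snippet, so the following is a minimal sketch using the Hugging Face transformers library. It assumes the repository hosts full (merged) model weights; if it only publishes LoRA adapter weights, the base model would need to be loaded first and the adapter attached with peft's PeftModel.from_pretrained instead.

```python
# Minimal loading sketch (assumes merged weights are published in the repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "parom23/qwen_chess_lora"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)
```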

Training Details

The model was trained using the following hyperparameters:

  • Learning Rate: 0.0002
  • Batch Size: 16 (train), 8 (eval)
  • Optimizer: ADAMW_TORCH with default betas and epsilon.
  • Epochs: 1
  • Mixed Precision: Native AMP was utilized during training.
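For reference, these hyperparameters can be expressed as transformers TrainingArguments. The field names below are standard, but the exact training script, output path, and mixed-precision flag are assumptions rather than details taken from the model card.

```python
# Sketch of the reported hyperparameters as TrainingArguments (not the
# author's actual training script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen_chess_lora",      # hypothetical output path
    learning_rate=2e-4,                # 0.0002 as reported
    per_device_train_batch_size=16,    # train batch size
    per_device_eval_batch_size=8,      # eval batch size
    num_train_epochs=1,
    optim="adamw_torch",               # ADAMW_TORCH with default betas/epsilon
    fp16=True,                         # "native AMP"; bf16=True is also plausible
)
```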

Good For

  • Applications requiring a compact model with a focus on chess-related understanding or generation.
  • Scenarios where a specialized, instruction-tuned model for a niche domain is preferred over a general-purpose LLM.
  • Use cases benefiting from a model with a large context window for domain-specific tasks.
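As a usage sketch, the prompt below reuses the tokenizer and model from the loading example above and assumes the model inherits the Qwen2.5-Instruct chat template from its base model; the chess question is purely illustrative.

```python
# Hedged inference sketch: chat-template prompting inherited from the base model.
messages = [
    {"role": "user", "content": "Suggest a strong reply to 1. e4 for Black and explain the idea."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```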