mansi-budamagunta/chess-qwen-lora-v1
Text generation · Concurrency cost: 1 · Model size: 1.5B · Quant: BF16 · Context length: 32k · Published: Mar 7, 2026 · Architecture: Transformer

mansi-budamagunta/chess-qwen-lora-v1 is a 1.5-billion-parameter language model, likely a LoRA fine-tune of a Qwen base model; the repository name suggests it is specialized for chess. With a context length of 32,768 tokens, it can process extensive input, such as long game histories. Its primary differentiator is this specialized fine-tuning: it targets a particular use case rather than general-purpose language generation.
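A minimal sketch of how such an adapter might be used, assuming the repo is a PEFT LoRA adapter published on Hugging Face. The base checkpoint name (`Qwen/Qwen2.5-1.5B`) and the `build_chess_prompt` template are assumptions, since neither is documented here; check the adapter's config for the actual base model and prompt format.

```python
def build_chess_prompt(moves):
    """Format a move list (SAN) into a simple next-move prompt.

    The exact prompt template used during fine-tuning is not documented,
    so this format is purely illustrative.
    """
    return "Moves: " + " ".join(moves) + "\nNext move:"


def load_model(adapter_id="mansi-budamagunta/chess-qwen-lora-v1",
               base_id="Qwen/Qwen2.5-1.5B"):
    """Load a base model and apply the LoRA adapter on top of it.

    Requires the `transformers` and `peft` packages; the base model
    name above is an assumption, not taken from the listing.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tok = AutoTokenizer.from_pretrained(adapter_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
    model = PeftModel.from_pretrained(base, adapter_id)  # merges-at-inference LoRA weights
    return tok, model


print(build_chess_prompt(["e4", "e5", "Nf3"]))
```

Keeping the heavyweight imports inside `load_model` lets the prompt helper be used (or tested) without downloading the 1.5B checkpoint.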
