recursal/QRWKV6-32B-Instruct-Preview-v0.1 is a 32-billion-parameter instruction-tuned RWKV model, one of the largest and strongest RWKV variants to date. Developed by Recursal, it uses a novel conversion technique to transform QKV-attention-based models such as Qwen into the RWKV architecture without retraining from scratch. Because RWKV replaces quadratic attention with a recurrent formulation, the model offers significant computational cost reductions at large context lengths, demonstrating over a 1000x improvement in inference cost efficiency compared to traditional transformer models.
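To illustrate where that efficiency comes from, the toy sketch below shows the general shape of an RWKV-style linear-attention recurrence: each token updates a fixed-size state, so per-token cost and memory stay constant as context grows, unlike a transformer's KV cache. This is a simplified illustration only, not the actual QRWKV6 implementation; the function name, scalar decay, and dimensions are illustrative assumptions.

```python
import numpy as np

def rwkv_style_step(state, decay, k, v, r):
    """One simplified recurrent step: decay the running
    outer-product memory, mix in the new key/value pair,
    then read it out with the receptance vector r.
    (Illustrative sketch, not the real QRWKV6 kernel.)"""
    state = decay * state + np.outer(k, v)   # fixed (d, d) memory
    out = r @ state                          # constant-cost readout
    return out, state

d = 4
rng = np.random.default_rng(0)
state = np.zeros((d, d))
for t in range(8):                           # process 8 tokens
    k, v, r = rng.standard_normal((3, d))
    out, state = rwkv_style_step(state, 0.9, k, v, r)

# The state stays (d, d) no matter how many tokens were seen,
# whereas a transformer's KV cache grows linearly with context.
```

The key point is that the loop touches only a `(d, d)` state per token, so inference cost is O(T) in sequence length rather than the O(T^2) of full attention.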