vivekmdrift/maya-qwen-7b
Text generation · Model size: 7.6B · Quant: FP8 · Context length: 32k · Published: Mar 4, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Concurrency cost: 1

vivekmdrift/maya-qwen-7b is a 7.6-billion-parameter instruction-tuned causal language model, fine-tuned from Qwen/Qwen2.5-7B-Instruct. It is optimized for customer-support conversations and supports a 32,768-token context length, allowing long multi-turn support transcripts to be kept in context.
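Since the model is instruction-tuned for multi-turn support chats, a request is typically assembled in the standard chat messages format used by Qwen2.5-Instruct-derived models before being rendered with the tokenizer's chat template. A minimal sketch (the helper names and the rough 4-characters-per-token estimate are illustrative assumptions, not part of this model card):

```python
# Sketch of assembling a customer-support conversation for maya-qwen-7b.
# Helper names are hypothetical; the messages schema is the standard
# chat format used by Qwen2.5-Instruct-derived models.

def build_messages(system_prompt: str,
                   history: list[tuple[str, str]],
                   user_msg: str) -> list[dict]:
    """Assemble a chat-format message list from a support transcript."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": user_msg})
    return messages

def fits_context(messages: list[dict],
                 ctx_len: int = 32768,
                 chars_per_token: int = 4) -> bool:
    """Crude check that a transcript fits the 32k context window,
    using a rough 4-characters-per-token estimate (an assumption)."""
    total_chars = sum(len(m["content"]) for m in messages)
    return total_chars // chars_per_token < ctx_len

messages = build_messages(
    "You are a helpful customer-support agent.",
    [("My order hasn't arrived.",
      "I'm sorry to hear that. Could you share your order number?")],
    "It's order 12345.",
)
# With Hugging Face transformers, `messages` would typically be passed to
# tokenizer.apply_chat_template(messages, add_generation_prompt=True)
# before generation.
assert fits_context(messages)
```

In practice the rendered prompt is then fed to the model's `generate` call; the context-length check above is only a rough guard, and the tokenizer itself gives the exact token count.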
