mehuldamani/sft-qwen-zmaze-v3
Task: Text generation
Concurrency cost: 1
Model size: 3.1B
Quantization: BF16
Context length: 32k
Published: Mar 31, 2026
Architecture: Transformer

mehuldamani/sft-qwen-zmaze-v3 is a 3.1-billion-parameter instruction-tuned causal language model based on the Qwen architecture. It has been fine-tuned for specific tasks, though the exact nature of its specialization is not detailed in the available information. With a context length of 32,768 tokens, it is suited to applications that require processing long input sequences, and its primary utility lies in following instructions for a range of language generation tasks.
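As a rough illustration of how an instruction-tuned Qwen-family model is typically prompted: Qwen instruct models generally use the ChatML format, wrapping each turn in `<|im_start|>`/`<|im_end|>` special tokens. The sketch below builds such a prompt by hand, purely as an assumption about this model's chat template (the model card does not specify it); in practice one would load the tokenizer with Hugging Face `transformers` and call `tokenizer.apply_chat_template` instead.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt, the format commonly used by
    Qwen-family instruct models (assumed here, not confirmed by the card)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"  # generation continues from here
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize the rules of the task.",
)
print(prompt)
```

With the real tokenizer, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` produces the canonical version of this string, so hand-rolling it is only useful for understanding the wire format.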
