willcb/Qwen3-32B
Text generation · Concurrency cost: 2 · Model size: 32B · Quant: FP8 · Context length: 32k · Published: Jun 29, 2025 · Architecture: Transformer

willcb/Qwen3-32B is a 32-billion-parameter language model based on the Qwen3 architecture, designed for general-purpose text generation. It is a large-scale, instruction-tuned variant that supports a 32,768-token (32k) context length. The model aims to provide robust performance across a range of natural language understanding and generation tasks, serving as a foundation for diverse applications.
