jaydenmao/qwen3-32b-toolace-function-calling
Text generation · Concurrency cost: 2 · Model size: 32B · Quant: FP8 · Context length: 32k · Published: Mar 18, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

The jaydenmao/qwen3-32b-toolace-function-calling model is a 32-billion-parameter Qwen3-based language model developed by jaydenmao. It was fine-tuned from unsloth/Qwen3-32B-unsloth-bnb-4bit using Unsloth and Hugging Face's TRL library for accelerated training. The model is designed for function-calling applications and supports a 32,768-token context length.
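As a sketch of how a function-calling workflow around such a model might look, the snippet below defines an OpenAI-style tool schema and parses tool calls wrapped in `<tool_call>...</tool_call>` tags, the convention used by Qwen-family chat templates. The `get_weather` tool and the simulated reply are hypothetical illustrations, not taken from this model card, and the exact tag format should be verified against the model's chat template.

```python
import json
import re

# Hypothetical tool schema in the OpenAI-style function-calling format
# (an illustrative assumption, not taken from the model card).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def parse_tool_calls(text: str):
    """Extract JSON tool calls wrapped in <tool_call>...</tool_call> tags,
    the convention used by Qwen-family chat templates."""
    calls = []
    for match in re.findall(r"<tool_call>\s*(\{.*?\})\s*</tool_call>",
                            text, re.DOTALL):
        calls.append(json.loads(match))
    return calls

# Simulated model output for illustration only.
reply = ('<tool_call>\n'
         '{"name": "get_weather", "arguments": {"city": "Paris"}}\n'
         '</tool_call>')
print(parse_tool_calls(reply))
# → [{'name': 'get_weather', 'arguments': {'city': 'Paris'}}]
```

In practice the `tools` list would be passed to the tokenizer's chat template, and each parsed call would be dispatched to the matching local function before returning the result to the model.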
