DevopsEmbrace/qwen3_32B_embrace_cpt_IV_e3_unsloth_Baseline_merged_16bit
Text generation · Concurrency cost: 2 · Model size: 32B · Quant: FP8 · Context length: 32k · Published: Dec 8, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold
DevopsEmbrace/qwen3_32B_embrace_cpt_IV_e3_unsloth_Baseline_merged_16bit is a 32-billion-parameter Qwen3 model from DevopsEmbrace, fine-tuned from unsloth/qwen3-32b-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, a combination the authors report enables roughly 2x faster training. The model is intended for general language tasks, pairing its large parameter count with this efficient fine-tuning setup.
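A minimal loading sketch, assuming the model is published on the Hugging Face Hub under the repository ID above and is loadable with the standard transformers `AutoModelForCausalLM`/`AutoTokenizer` API (the function name `load_model` is illustrative, not part of the model card):

```python
# Hedged sketch: loading this checkpoint with Hugging Face transformers.
# The repo ID comes from the model card; everything else is an assumption
# about a standard transformers setup.
MODEL_ID = "DevopsEmbrace/qwen3_32B_embrace_cpt_IV_e3_unsloth_Baseline_merged_16bit"


def load_model(model_id: str = MODEL_ID):
    """Load the tokenizer and model, sharding across available devices."""
    # Imported lazily so the module can be imported without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # keep the checkpoint's native (merged 16-bit) precision
        device_map="auto",    # a 32B model typically needs multi-GPU or CPU offloading
    )
    return tokenizer, model
```

In practice a 32B model in 16-bit weights needs on the order of 64 GB of accelerator memory, so `device_map="auto"` (or a quantized load) is usually required on single-GPU machines.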