didula-wso2/qwen3-8B_sft-balsft_16bit_vllm
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 23, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

didula-wso2/qwen3-8B_sft-balsft_16bit_vllm is a Qwen3-based language model fine-tuned by didula-wso2. Training was accelerated with Unsloth and Hugging Face's TRL library, and the model is packaged for efficient serving in vLLM environments. It is intended for general language tasks, relying on the Qwen3 architecture for robust performance.
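Since the card highlights vLLM deployment, a minimal offline-inference sketch might look like the following. This is an illustrative example, not from the card itself: it assumes `vllm` is installed, a GPU is available, and the model ID shown above resolves on the serving host; sampling values are arbitrary placeholders.

```python
# Hedged sketch: load the model with vLLM's offline LLM API and generate text.
# Assumes the `vllm` package and a CUDA-capable GPU; parameters are examples only.
from vllm import LLM, SamplingParams

# Model ID taken from this card; max_model_len matches the stated 32k context.
llm = LLM(model="didula-wso2/qwen3-8B_sft-balsft_16bit_vllm", max_model_len=32768)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Summarize the Qwen3 architecture in one sentence."], params)
print(outputs[0].outputs[0].text)
```

The same model can alternatively be exposed over an OpenAI-compatible endpoint with `vllm serve <model-id>`, which is the more common path for production deployments.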
