didula-wso2/Qwen3-8B_julia_alpaca_extendedsft_16bit_vllm
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 18, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
didula-wso2/Qwen3-8B_julia_alpaca_extendedsft_16bit_vllm is an 8-billion-parameter Qwen3 model published by didula-wso2. It was finetuned with Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training. The model targets general language tasks, building on the Qwen3 architecture and this efficient finetuning pipeline.
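The "vllm" suffix in the model name suggests it is packaged for serving with vLLM. A minimal sketch of how such a checkpoint is typically served, assuming a standard vLLM installation and that the repository is directly loadable (flags shown are standard vLLM options; the exact values are illustrative):

```shell
# Serve the model with vLLM's OpenAI-compatible API server.
# --max-model-len matches the advertised 32k context window;
# adjust to your GPU memory. Port and other values are examples.
vllm serve didula-wso2/Qwen3-8B_julia_alpaca_extendedsft_16bit_vllm \
    --max-model-len 32768 \
    --port 8000
```

Once running, the endpoint accepts OpenAI-style chat/completions requests at `http://localhost:8000/v1`.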