didula-wso2/Qwen3-8B_julia_alpaca_ep4sft_16bit_vllm
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 20, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

didula-wso2/Qwen3-8B_julia_alpaca_ep4sft_16bit_vllm is an 8-billion-parameter Qwen3 model developed by didula-wso2 and fine-tuned with Unsloth and Hugging Face's TRL library. It supports a 32,768-token context length, and its fine-tuning was accelerated by Unsloth. The model is intended for general language tasks, building on the Qwen3 architecture and efficient fine-tuning methods.
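The model name suggests it is packaged for vLLM serving. Below is a minimal inference sketch, assuming the weights are available on the Hugging Face Hub under this repository ID and that your installed vLLM version supports the Qwen3 architecture; the prompt and sampling parameters are illustrative only.

```python
from vllm import LLM, SamplingParams

# Assumption: the repo ID below is reachable from the Hub and loadable by vLLM.
llm = LLM(
    model="didula-wso2/Qwen3-8B_julia_alpaca_ep4sft_16bit_vllm",
    max_model_len=32768,  # matches the advertised 32k context length
)

# Illustrative sampling settings; tune for your use case.
sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

prompts = ["Write a short Julia function that computes the factorial of n."]
outputs = llm.generate(prompts, sampling)
print(outputs[0].outputs[0].text)
```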
