didula-wso2/Qwen3-8B_julia_planning_alpaca-ep4sft_16bit_vllm
Task: Text Generation · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Concurrency Cost: 1 · Published: Mar 24, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

didula-wso2/Qwen3-8B_julia_planning_alpaca-ep4sft_16bit_vllm is an 8-billion-parameter Qwen3 model fine-tuned by didula-wso2. The fine-tuning process was optimized for speed, using Unsloth together with Hugging Face's TRL library to train roughly 2x faster. The model is intended for general language tasks, building on the base Qwen3 architecture.
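Since the repository name's `_vllm` suffix suggests the model is packaged for serving with vLLM, a minimal sketch of querying it through a vLLM OpenAI-compatible endpoint might look like the following. The endpoint path, prompt, and sampling values here are illustrative assumptions, not documented defaults:

```python
import json

# Model ID taken from this card; everything else below is an
# illustrative assumption about a typical vLLM deployment.
MODEL_ID = "didula-wso2/Qwen3-8B_julia_planning_alpaca-ep4sft_16bit_vllm"

def build_chat_request(prompt, max_tokens=256, temperature=0.7):
    """Build the JSON body for a POST to /v1/chat/completions
    on a vLLM server hosting this model (hypothetical setup)."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

body = build_chat_request("Outline a plan for parsing a CSV file in Julia.")
print(json.dumps(body, indent=2))
```

The request body could then be sent with any HTTP client (e.g. `requests.post("http://localhost:8000/v1/chat/completions", json=body)`), assuming a locally running `vllm serve` instance.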
