didula-wso2/Qwen3-8B_julia_planning_alpaca500-ep4sft_16bit_vllm
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quantization: FP8 · Context Length: 32k · Published: Mar 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

didula-wso2/Qwen3-8B_julia_planning_alpaca500-ep4sft_16bit_vllm is an 8-billion-parameter Qwen3 model released by didula-wso2 and fine-tuned for planning tasks. It was trained with Unsloth and Hugging Face's TRL library, which the authors report gives roughly 2x faster training. The model is packaged for efficient, accelerated large language model inference.
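The "_vllm" suffix suggests the checkpoint is intended to be served with vLLM. Below is a minimal sketch of offline inference against this model ID; the sampling parameters, prompt, and `max_model_len` value are illustrative assumptions, not settings published with the model.

```python
# Minimal vLLM offline-inference sketch for this checkpoint.
# Assumptions: the repo loads directly in vLLM, and 32k context is supported
# as listed above; sampling settings below are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="didula-wso2/Qwen3-8B_julia_planning_alpaca500-ep4sft_16bit_vllm",
    max_model_len=32768,  # matches the listed 32k context length
)

sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=512)

# Example planning-style prompt (hypothetical).
prompts = ["Outline a step-by-step plan for migrating a web service to a new database."]

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```

For a persistent OpenAI-compatible endpoint, the same model ID can be passed to `vllm serve` instead of the offline `LLM` class.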
