didula-wso2/Qwen3-8B_julia_planning_500-ep4sft_16bit_vllm
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

didula-wso2/Qwen3-8B_julia_planning_500-ep4sft_16bit_vllm is an 8-billion-parameter Qwen3 model fine-tuned by didula-wso2. It was trained with Unsloth and Hugging Face's TRL library, a combination that enables roughly 2x faster fine-tuning. The model is intended for general language tasks, building on the Qwen3 architecture and an efficient fine-tuning process.
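The "_vllm" suffix in the model ID suggests the checkpoint was exported for serving with vLLM. Below is a minimal usage sketch, assuming the weights load directly with vLLM's offline inference API; the prompt and sampling settings are illustrative only.

```python
from vllm import LLM, SamplingParams

# Load the fine-tuned checkpoint with vLLM.
# max_model_len matches the 32k context length listed on the card (assumption).
llm = LLM(
    model="didula-wso2/Qwen3-8B_julia_planning_500-ep4sft_16bit_vllm",
    max_model_len=32768,
)

# Illustrative sampling parameters; tune for your task.
sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

# Run a single generation request.
outputs = llm.generate(["Outline a step-by-step plan for a small Julia project."], sampling)
print(outputs[0].outputs[0].text)
```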
