didula-wso2/Qwen3-8B_julia_planning-ep4sft_16bit_vllm
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Mar 24, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
didula-wso2/Qwen3-8B_julia_planning-ep4sft_16bit_vllm is an 8-billion-parameter Qwen3-based language model developed by didula-wso2 and fine-tuned for planning tasks. It was trained with Unsloth and Hugging Face's TRL library, which the author reports gave 2x faster training, and it builds on a previously fine-tuned Qwen3-8B checkpoint. The model is intended for applications that need efficient, specialized language processing.
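The `_vllm` suffix in the name suggests the checkpoint was packaged for serving with vLLM. As a minimal sketch (not part of the model card; the prompt, sampling settings, and helper name are illustrative), offline inference might look like this:

```python
# Illustrative sketch: serving the checkpoint for offline inference with vLLM.
# Assumes vLLM is installed and the checkpoint is reachable on the Hugging Face Hub.
MODEL_ID = "didula-wso2/Qwen3-8B_julia_planning-ep4sft_16bit_vllm"
CTX_LEN = 32 * 1024  # 32k context window, per the listing above


def generate_plan(prompt: str, max_tokens: int = 256) -> str:
    """Generate a planning-style completion from the fine-tuned model."""
    # Imported lazily so the sketch can be read without vLLM installed.
    from vllm import LLM, SamplingParams

    llm = LLM(model=MODEL_ID, max_model_len=CTX_LEN)
    params = SamplingParams(temperature=0.7, max_tokens=max_tokens)
    outputs = llm.generate([prompt], params)
    return outputs[0].outputs[0].text


# Example (requires a GPU and the model weights):
# print(generate_plan("Draft a step-by-step plan for migrating a service to Kubernetes."))
```

For production serving, the same checkpoint could instead be launched behind vLLM's OpenAI-compatible HTTP server (`vllm serve <model-id>`), which avoids loading the model inside the application process.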