didula-wso2/Qwen3-8B_julia_clean-alpacasft_16bit_vllm
Text Generation · Open Weights · Cold
- Concurrency Cost: 1
- Model Size: 8B
- Quant: FP8
- Ctx Length: 32k
- Published: Mar 19, 2026
- License: apache-2.0
- Architecture: Transformer
didula-wso2/Qwen3-8B_julia_clean-alpacasft_16bit_vllm is an 8-billion-parameter Qwen3 model, developed by didula-wso2 and fine-tuned with Unsloth and Hugging Face's TRL library; the author reports that this setup made training about 2x faster. Built on the Qwen3 architecture and an efficient fine-tuning process, the model is intended for general language tasks.
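Since the model is packaged for vLLM (as its name suggests), one way to try it locally is vLLM's OpenAI-compatible server. The commands below are a sketch: the port, endpoint, and flags assume vLLM defaults, and the prompt is an illustrative example, not part of this card.

```shell
# Serve the model with vLLM's OpenAI-compatible server
# (model ID from this card; 32768 matches the 32k context length listed above)
vllm serve didula-wso2/Qwen3-8B_julia_clean-alpacasft_16bit_vllm \
  --max-model-len 32768

# Query the completions endpoint (port 8000 is vLLM's default)
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "didula-wso2/Qwen3-8B_julia_clean-alpacasft_16bit_vllm",
        "prompt": "Write a Julia function that reverses a string.",
        "max_tokens": 128
      }'
```

Serving this way requires a GPU with enough memory for the 8B weights; smaller `--max-model-len` values reduce the KV-cache footprint if memory is tight.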