didula-wso2/Qwen3-8B_julia_clean-codenet_clean-alpacasft_16bit_vllm
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Mar 19, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
didula-wso2/Qwen3-8B_julia_clean-codenet_clean-alpacasft_16bit_vllm is an 8-billion-parameter Qwen3 model developed by didula-wso2 and fine-tuned from unsloth/qwen3-8b-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which enables faster fine-tuning. The model is designed for general language tasks, leveraging the Qwen3 architecture and a 32,768-token context length.
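Since the model name indicates it is packaged for vLLM, one typical way to query it is through vLLM's OpenAI-compatible server. The sketch below only builds the request payload; the endpoint URL, the helper name `build_request`, and the example prompt are illustrative assumptions, not part of this model card.

```python
import json

# Assumption: the model is served locally with vLLM's OpenAI-compatible
# server (e.g. `vllm serve <model>`); this URL is a placeholder.
API_URL = "http://localhost:8000/v1/chat/completions"
MODEL_ID = "didula-wso2/Qwen3-8B_julia_clean-codenet_clean-alpacasft_16bit_vllm"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completion payload for this model."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Hypothetical prompt; POST the serialized payload to API_URL to get a reply.
payload = build_request("Write a Julia function that reverses a string.")
body = json.dumps(payload)
```

In a real client, `body` would be sent as the POST body with a `Content-Type: application/json` header; any HTTP library works, since the server follows the standard OpenAI chat-completions schema.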