didula-wso2/Qwen3-8B_julia_alpaca2_codenetsft_16bit_vllm
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Mar 19, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
didula-wso2/Qwen3-8B_julia_alpaca2_codenetsft_16bit_vllm is an 8-billion-parameter Qwen3-based causal language model published by didula-wso2, fine-tuned with Unsloth and Hugging Face's TRL library. The checkpoint is stored in 16-bit precision and packaged for serving with vLLM, targeting efficient training and inference. It is intended for general language tasks, building on the Qwen3 architecture and its specialized fine-tuning data.
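The card does not include a usage snippet. Since the checkpoint is packaged for vLLM, a minimal sketch of loading it with vLLM's offline `LLM` API might look like the following (the prompt and sampling settings are illustrative assumptions, and a GPU with enough memory for an 8B model is required):

```python
# Sketch: offline inference with vLLM. Assumes `pip install vllm` and a
# suitable GPU; the prompt and sampling parameters are illustrative only.
from vllm import LLM, SamplingParams

llm = LLM(model="didula-wso2/Qwen3-8B_julia_alpaca2_codenetsft_16bit_vllm")
params = SamplingParams(temperature=0.7, max_tokens=256)

# Generate a completion for a single prompt and print the text.
outputs = llm.generate(
    ["Write a Julia function that reverses a string."], params
)
print(outputs[0].outputs[0].text)
```

`LLM.generate` accepts a list of prompts and returns one `RequestOutput` per prompt, so the same call scales to batched inference without code changes.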