Qwen1.5-0.5B is a 0.6 billion parameter, decoder-only transformer language model from the Qwen team. As part of Qwen1.5, the beta version of Qwen2, it offers stable 32K context length support and multilingual capabilities. It is a base model intended for further post-training, such as SFT or RLHF, rather than direct text generation.
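A minimal sketch of loading the checkpoint with Hugging Face `transformers`, assuming the hub ID `Qwen/Qwen1.5-0.5B` and a `transformers` version recent enough to include Qwen2 support (>= 4.37):

```python
# Sketch: load the Qwen1.5-0.5B base checkpoint via Hugging Face transformers.
# Assumes hub ID "Qwen/Qwen1.5-0.5B" and transformers >= 4.37 (Qwen2 support).

MODEL_ID = "Qwen/Qwen1.5-0.5B"

def load_model(model_id: str = MODEL_ID):
    # Imported lazily so this module can be used without triggering a download.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    # Note: as a base model, it is better suited to post-training (SFT/RLHF)
    # than to direct generation.
    print(model.config.model_type)
```

Since this is a base (non-chat) checkpoint, there is no chat template to apply; downstream fine-tuning would typically add one.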