Qwen1.5-4B is a 4-billion-parameter decoder-only transformer language model from the Qwen team, released as the beta version of Qwen2. The model supports a stable 32K context length and ships with an improved tokenizer that adapts better to multiple natural languages and code. As a base model, it is intended as a starting point for post-training such as SFT or RLHF rather than for direct text generation.
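As a minimal sketch of how the base checkpoint might be loaded for downstream fine-tuning or a quick sanity check, assuming the Hugging Face hub ID `Qwen/Qwen1.5-4B` and a recent `transformers` release (Qwen1.5 requires >= 4.37.0):

```python
# Sketch: load the base model with Hugging Face transformers.
# Assumes hub ID "Qwen/Qwen1.5-4B" and transformers >= 4.37.0;
# device_map="auto" additionally requires the `accelerate` package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-4B",
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # spread weights across available devices
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-4B")

# The base model targets post-training (SFT/RLHF), not chat use;
# a short completion here only verifies that loading worked.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For actual use, the checkpoint would typically be passed to a fine-tuning pipeline rather than prompted directly.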