Qwen/Qwen1.5-32B is a 32.5 billion parameter, transformer-based decoder-only language model developed by the Qwen team at Alibaba Cloud, released as the beta version of Qwen2. It supports a stable 32K context length and delivers significant performance improvements over previous Qwen models, including stronger multilingual capabilities. As a base model, it is intended for further post-training such as SFT or RLHF rather than direct text generation.
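As a quick sanity check that the checkpoint loads, the model can be used through the standard Hugging Face transformers API. A minimal sketch, assuming transformers >= 4.37.0 (which includes the Qwen2 model code) and enough GPU memory for a 32B checkpoint; the prompt is purely illustrative:

```python
# Minimal loading sketch for Qwen/Qwen1.5-32B via Hugging Face transformers.
# Requires transformers >= 4.37.0; device_map="auto" also needs `accelerate`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the dtype stored in the checkpoint
    device_map="auto",   # shard across available GPUs automatically
)

# As a base model it is meant for post-training, but a raw completion
# can verify the load worked end to end.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that this base checkpoint has no chat template applied; for instruction-following behavior, the separate chat variant or your own fine-tune is the appropriate starting point.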