Qwen1.5-14B is a 14.2-billion-parameter decoder-only transformer language model developed by the Qwen team, serving as the beta version of Qwen2. The Qwen1.5 series brings significant performance improvements to its chat models and stable support for a 32K context length across all model sizes. The series is designed for multilingual applications; this base checkpoint is intended for further post-training (e.g., supervised fine-tuning or RLHF) before deployment on downstream tasks such as text generation.
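As a quick illustration, the model can be loaded for plain text generation via the Hugging Face `transformers` library. This is a minimal sketch, assuming the checkpoint is available on the Hugging Face Hub under the `Qwen/Qwen1.5-14B` ID, that `transformers >= 4.37` is installed (the release that added Qwen2 architecture support), and that `accelerate` is available for `device_map="auto"`:

```python
# Minimal text-generation sketch for the base model
# (assumes transformers >= 4.37 and accelerate installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-14B"  # base model; the -Chat variant is tuned for dialogue

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # spread layers across available devices
)

# Base models continue a prompt rather than follow instructions.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that, as a pretrained base model, it is best suited as a starting point for post-training; for out-of-the-box conversational use, the corresponding chat variant is the more appropriate choice.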