Quyen-Pro-Max-v0.1 by vilm is a 72.3-billion-parameter large language model based on the Qwen1.5 family, with a 32,768-token context length. It was trained with Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) on a diverse mix of datasets, including OpenHermes-2.5, Capybara, and private data. The model is designed for general-purpose conversational AI, leveraging its large parameter count for robust language understanding and generation.