jpacifico/Chocolatine-2-14B-Instruct-v2.0.3 is a 14.8-billion-parameter instruction-tuned causal language model developed by Jonathan Pacifico on the Qwen-2.5-14B architecture. Fine-tuned with Direct Preference Optimization (DPO) on a French RLHF dataset, it is strong at French-language tasks, ranking at the top of the French Government LLM Leaderboard and performing well on MT-Bench-French, where it comes close to GPT-4o-mini. The model is optimized for French-language generation and understanding and supports a context length of up to 128K tokens.
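Below is a minimal usage sketch loading the model with the Hugging Face transformers library. The chat-template call and generation settings are illustrative assumptions for a Qwen-2.5-style instruct model, not parameters documented by the author.

```python
# Minimal sketch, assuming the model follows the standard transformers
# chat-template workflow; generation settings here are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jpacifico/Chocolatine-2-14B-Instruct-v2.0.3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

# French instruction-following example
messages = [
    {"role": "user", "content": "Explique brièvement la photosynthèse."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,   # assumed limit for a short answer
    temperature=0.7,      # assumed sampling temperature
    do_sample=True,
)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```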