Qwen2.5-14B-Gutenberg-1e-Delta is a 14.8-billion-parameter language model fine-tuned by v000000 from Qwen2.5-14B-Instruct. It was trained for 1.25 epochs with Direct Preference Optimization (DPO) on jondurbin/gutenberg-dpo-v0.1, a preference dataset built from public-domain Project Gutenberg books to improve long-form prose writing. The model retains Qwen2.5's 131,072-token context length, making it suitable for tasks that require extensive contextual understanding and generation.
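Below is a minimal usage sketch with the standard Hugging Face `transformers` API. It assumes the model is published on the Hub under the repo id `v000000/Qwen2.5-14B-Gutenberg-1e-Delta` (inferred from the name and author above) and that it follows the usual Qwen2.5 chat template; the prompt is purely illustrative.

```python
# Minimal sketch: load the model and generate a chat completion.
# Repo id is an assumption inferred from the model name and author.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v000000/Qwen2.5-14B-Gutenberg-1e-Delta"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick bf16/fp16 as the hardware supports
    device_map="auto",    # shard across available devices
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write the opening paragraph of a gothic short story."},
]
# Apply the Qwen2.5 chat template and tokenize in one step.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```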