bfavro73/qwen2.5-coder-7b-pandas-dpo-aligned
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Context length: 32K · Published: Mar 24, 2026 · Architecture: Transformer

bfavro73/qwen2.5-coder-7b-pandas-dpo-aligned is a 7-billion-parameter language model fine-tuned from Qwen2.5-Coder-7B-Instruct. It specializes in Python data analysis, having been aligned with offline DPO on a preference dataset built for that task. The model offers a 128K-token context window and is optimized for coding agents, while retaining strong capabilities in mathematics and general tasks.
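Since the model targets pandas-style data-analysis questions, a request to it might look like the following minimal sketch. It assumes the model is served behind an OpenAI-compatible `/v1/chat/completions` endpoint (an assumption, not something stated on this card) and only constructs the JSON request body; no network call is made.

```python
import json

# Model id taken from this card; the endpoint shape is an assumption
# based on the common OpenAI-compatible chat-completions API.
MODEL_ID = "bfavro73/qwen2.5-coder-7b-pandas-dpo-aligned"


def build_chat_request(question: str, max_tokens: int = 512) -> str:
    """Build the JSON body for an OpenAI-compatible chat-completions call."""
    payload = {
        "model": MODEL_ID,
        "messages": [
            {
                "role": "system",
                "content": "You are a Python data-analysis assistant "
                           "specializing in pandas.",
            },
            {"role": "user", "content": question},
        ],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)


# Example: a typical pandas question this model is tuned for.
body = build_chat_request(
    "Group a DataFrame by 'city' and compute the mean of 'price'."
)
```

The resulting `body` string can be POSTed to whatever host serves the model; the system prompt shown here is illustrative, not prescribed by the card.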
