socratesft/socrates-qwen2.5-14b-dpo
Task: Text Generation · Concurrency Cost: 1 · Model Size: 14.8B · Quant: FP8 · Ctx Length: 32k · Published: Aug 31, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

The socratesft/socrates-qwen2.5-14b-dpo model is a 14.8-billion-parameter language model developed by socratesft, built on Qwen2.5-14B-Instruct. It was fine-tuned with Direct Preference Optimization (DPO) on the SocSci210 dataset and specializes in simulating survey respondent behavior. The model targets tasks that require nuanced, instruction-following responses conditioned on specific demographic profiles and survey questions, and offers a context length of 131,072 tokens.
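The use case described above, pairing a demographic profile with a survey question, can be sketched as a chat-style prompt. This is a minimal, hypothetical illustration: the profile fields, system wording, and helper name are assumptions, not part of the model card, and the exact prompt format expected by the model may differ.

```python
def build_survey_messages(profile: dict, question: str) -> list[dict]:
    """Compose a chat-style message list that conditions the model on a
    demographic persona before asking a survey question (illustrative only)."""
    # Flatten the profile into a readable persona description.
    persona = "; ".join(f"{k}: {v}" for k, v in profile.items())
    system = (
        "You are simulating a survey respondent with the following "
        f"demographic profile: {persona}. Answer as that person would."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Example usage with an invented profile and question:
messages = build_survey_messages(
    {"age": "34", "region": "Midwest US", "education": "bachelor's degree"},
    "How often do you use public transit in a typical week?",
)
```

The resulting `messages` list can then be passed to a chat-completion endpoint or formatted with the tokenizer's chat template before generation.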
