socratesft/socrates-llama3-8b-dpo
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 8k · Published: Aug 31, 2025 · License: llama3 · Architecture: Transformer

socratesft/socrates-llama3-8b-dpo is an 8-billion-parameter language model developed by socratesft, fine-tuned with Direct Preference Optimization (DPO) on the SocSci210 dataset. Derived from Meta-Llama-3-8B-Instruct, the model specializes in simulating survey respondent behavior, generating answers conditioned on a specified demographic profile and instructions. Its primary use cases are social science research and applications that require nuanced, persona-driven text generation.
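A minimal sketch of the persona-conditioned prompting described above, assuming the model follows the standard Llama 3 chat message format. The demographic field names and system-prompt wording here are illustrative assumptions, not the documented SocSci210 schema:

```python
# Sketch: build a chat request that asks the model to answer a survey
# question as a specified demographic persona. Field names and prompt
# wording are illustrative assumptions, not the model's documented format.

def build_survey_messages(profile: dict, question: str) -> list[dict]:
    """Compose a Llama-3-style chat message list for persona simulation."""
    persona = ", ".join(f"{k}: {v}" for k, v in profile.items())
    system = (
        "You are simulating a survey respondent with the following "
        f"demographic profile ({persona}). Answer the survey question "
        "as that person would, in one short response."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

messages = build_survey_messages(
    {"age": 34, "education": "bachelor's degree", "region": "Midwest US"},
    "How concerned are you about data privacy online?",
)
```

The resulting `messages` list can then be passed to a chat-completion endpoint, or rendered with a tokenizer's `apply_chat_template` for local generation with the model.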
