simonycl/GLM-4-9B-0414-InverseIFEval-DPO
Text Generation · Concurrency Cost: 1 · Model Size: 9B · Quant: FP8 · Context Length: 32K · Published: Mar 24, 2026 · Architecture: Transformer · Cold

The simonycl/GLM-4-9B-0414-InverseIFEval-DPO model is a 9-billion-parameter language model fine-tuned from THUDM/GLM-4-9B-0414 with Direct Preference Optimization (DPO). It supports a 32K context length and is trained to align its outputs with human preferences, so that its responses are more likely to be favored over alternative completions.
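
As a usage sketch, the model can also be run locally with Hugging Face transformers. The snippet below is illustrative only: it assumes a recent transformers release with GLM-4 support and enough GPU memory for a 9B model; the model ID is the only detail taken from this page, and the prompt and generation settings are placeholders.

```python
# Minimal local-inference sketch (assumes a recent transformers release with
# GLM-4 support and a GPU large enough for a 9B model in bf16).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "simonycl/GLM-4-9B-0414-InverseIFEval-DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 fits your hardware
    device_map="auto",
)

# Build a chat-formatted prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize DPO in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```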


Popular Sampler Settings

Featherless tracks the three most popular parameter combinations used with this model; each configuration sets the following sampler parameters:

temperature · top_p · top_k · frequency_penalty · presence_penalty · repetition_penalty · min_p
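
As a sketch of how these settings are applied in a request, the example below assumes Featherless exposes an OpenAI-compatible chat completions endpoint and that samplers outside the OpenAI schema (top_k, repetition_penalty, min_p) are accepted as extra body fields. The base URL and all parameter values are illustrative assumptions, not the actual top configurations reported above.

```python
# Illustrative request (assumes an OpenAI-compatible endpoint; the values below
# are placeholders, not the measured "top 3" configurations).
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumption: OpenAI-compatible base URL
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="simonycl/GLM-4-9B-0414-InverseIFEval-DPO",
    messages=[{"role": "user", "content": "Write a short product description."}],
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard samplers go in extra_body; support depends on the serving backend.
    extra_body={"top_k": 40, "repetition_penalty": 1.05, "min_p": 0.05},
)
print(response.choices[0].message.content)
```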