PetroGPT/Voldemort-10B-DPO
Task: Text generation
Concurrency cost: 1
Model size: 10.7B
Quantization: FP8
Context length: 4k
Published: Jan 20, 2024
License: apache-2.0
Architecture: Transformer
Open weights · Cold

PetroGPT/Voldemort-10B-DPO is a 10.7-billion-parameter language model developed by PetroGPT and fine-tuned with Direct Preference Optimization (DPO). This preference-based alignment makes it suited to tasks that benefit from instruction following tuned against human preferences. The model supports a context length of 4096 tokens.
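A minimal usage sketch with the Hugging Face `transformers` library, assuming the weights are published under this identifier on the Hub. The prompt template below is a placeholder for illustration, not the model's confirmed chat format; check the model repository for the template it was actually tuned with.

```python
# Minimal usage sketch for PetroGPT/Voldemort-10B-DPO via transformers.
# The prompt template is an assumption -- consult the model repo for the
# actual chat format used during DPO tuning.

MODEL_ID = "PetroGPT/Voldemort-10B-DPO"
MAX_CONTEXT = 4096  # context length stated on the model card


def build_prompt(user_message: str) -> str:
    """Wrap a user message in a simple instruction-style template (assumed)."""
    return f"### User:\n{user_message}\n\n### Assistant:\n"


def generate(user_message: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a reply. This downloads the full
    10.7B-parameter weights, so it is deliberately not called at import time."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(user_message), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)


print(build_prompt("Summarize DPO in one sentence."))
```

Keeping prompts well under the 4096-token context limit leaves room for the generated completion, since input and output tokens share the same window.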
