PetroGPT/WestSeverus-7B-DPO
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 8k · Published: Jan 24, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights
PetroGPT/WestSeverus-7B-DPO is a 7-billion-parameter language model developed by PetroGPT. It is fine-tuned with Direct Preference Optimization (DPO) and supports an 8192-token context length. The model targets general language understanding and generation tasks.
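A minimal usage sketch with the Hugging Face Transformers library, assuming the weights are hosted on the Hub under this repo ID (the exact chat/prompt format is not documented here, so a plain prompt is used):

```python
MODEL_ID = "PetroGPT/WestSeverus-7B-DPO"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the model and return a completion for `prompt`.

    Imports are done lazily so the module can be inspected without
    the `transformers` dependency installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # device_map="auto" places the 7B weights on GPU if one is available.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize Direct Preference Optimization in one sentence."))
```

Note that loading the full FP16/FP8 checkpoint requires roughly 14 GB (or ~8 GB quantized) of GPU memory.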