amazingvince/openhermes-7b-dpo
Text Generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Jan 27, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights

amazingvince/openhermes-7b-dpo is an experimental 7-billion-parameter, DPO-tuned, Mistral-based language model with a 4096-token context length. It continues the OpenHermes 2 line, further fine-tuned on additional code datasets. Notably, the code-instruction training improved performance on non-code benchmarks such as TruthfulQA, AGIEval, and the GPT4All suite, making the model suitable for a range of general language understanding and generation tasks.
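A minimal prompt-formatting sketch for querying the model. This assumes the DPO variant follows the ChatML chat template commonly used by OpenHermes 2 releases; the template and the `build_chatml_prompt` helper below are illustrative, not taken from the model card, so verify against the model's tokenizer configuration before relying on them.

```python
def build_chatml_prompt(messages):
    """Join (role, content) pairs into a ChatML-style prompt string.

    ChatML wraps each turn in <|im_start|>role ... <|im_end|> markers and
    ends with an open assistant turn to cue the model's reply. Whether this
    exact template applies to openhermes-7b-dpo is an assumption.
    """
    parts = [
        f"<|im_start|>{role}\n{content}<|im_end|>"
        for role, content in messages
    ]
    # Leave the assistant turn open so generation continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


if __name__ == "__main__":
    prompt = build_chatml_prompt([
        ("system", "You are a helpful assistant."),
        ("user", "Summarize DPO fine-tuning in one sentence."),
    ])
    print(prompt)
```

The resulting string can be passed to any text-completion endpoint serving the model; within the 4096-token context limit, multi-turn history is handled by appending prior (role, content) pairs to the list.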
