neovalle/H4rmoniousBreezeDPO
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 4K · Published: Oct 30, 2023 · License: MIT · Architecture: Transformer · Open weights · Cold

neovalle/H4rmoniousBreezeDPO is a 7-billion-parameter Mistral-based language model developed by Jorge Vallego and funded by Neovalle Ltd. It is fine-tuned from HuggingFaceH4/zephyr-7b-beta using Direct Preference Optimization (DPO) on the H4rmony_dpo dataset. The model is primarily a proof of concept: it demonstrates ecological alignment grounded in ecolinguistics principles and serves as a testbed for evaluating the H4rmony dataset.
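For context on the fine-tuning method, the sketch below shows the standard per-pair DPO objective (not this model's actual training code, which is not published here). The function name, the example log-probabilities, and the default β = 0.1 are illustrative assumptions; the formula itself is the usual DPO loss, −log σ(β[(log πθ(y_w) − log πref(y_w)) − (log πθ(y_l) − log πref(y_l))]).

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair.

    Each argument is the (summed) log-probability the policy or the frozen
    reference model assigns to the chosen (preferred) or rejected completion.
    beta controls how strongly the policy is pushed away from the reference.
    """
    # Implicit reward of each completion: log-prob ratio vs. the reference.
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)), computed stably as log(1 + exp(-logits)).
    return math.log1p(math.exp(-logits))

# When the policy favours the chosen completion more than the reference
# does, the loss drops below the indifference value of log(2).
indifferent = dpo_loss(-2.0, -2.0, -2.0, -2.0)   # equals log(2)
improved = dpo_loss(-1.0, -3.0, -2.0, -2.0)      # policy prefers chosen
```

Training on the H4rmony_dpo preference pairs repeatedly minimises this loss, nudging the zephyr-7b-beta base toward the ecologically preferred responses.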
