mncai/mistral-7b-dpo-merge-v1.1
Task: Text Generation
Concurrency Cost: 1
Model Size: 7B
Quant: FP8
Ctx Length: 4k
Published: Dec 17, 2023
License: apache-2.0
Architecture: Transformer
Tags: Open Weights, Cold
mncai/mistral-7b-dpo-merge-v1.1 is a 7-billion-parameter instruction-tuned language model from MindsAndCompany, built on the Mistral architecture. It was produced with the TIES merging method from several DPO-tuned models: mncai/mistral-7b-dpo-v6, rwitz2/go-bruins-v2.1.1, ignos/LeoScorpius-GreenNode-Alpaca-7B-v1, and janai-hq/trinity-v1. The model targets general instruction-following tasks, drawing on the merged DPO fine-tuning for improved conversational quality, and supports a context length of 4096 tokens.
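Below is a minimal inference sketch using the Hugging Face transformers library. The model id comes from this card; the dtype, device mapping, prompt, and generation settings are illustrative assumptions, not part of the card.

```python
# Minimal sketch: load the model and generate a completion with transformers.
# Assumptions: float16 weights, a single GPU, and illustrative sampling settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mncai/mistral-7b-dpo-merge-v1.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fits a 7B model on a ~16 GB GPU
    device_map="auto",
)

prompt = "Explain the TIES merging method in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keep prompt plus completion within the 4096-token context window.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True,
))
```

For multi-turn use, tokenizer.apply_chat_template (if the tokenizer ships a chat template) is the usual way to format conversation history before generation.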