EmbeddedLLM/Mistral-7B-Merge-02-v0
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Dec 20, 2023 · License: apache-2.0 · Architecture: Transformer · Open weights

EmbeddedLLM/Mistral-7B-Merge-02-v0 is a 7-billion-parameter language model based on the Mistral-7B-v0.1 architecture, created by EmbeddedLLM. It is an experimental merge of teknium/OpenHermes-2.5-Mistral-7B and Intel/neural-chat-7b-v3-3 produced with the DARE TIES method, intended to test how DARE TIES compares with SLERP as a technique for combining specialized models.
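A DARE TIES merge like this one is typically produced with the mergekit tool. The sketch below shows what such a configuration could look like; the density and weight values are illustrative assumptions, not the actual parameters EmbeddedLLM used, which are not stated here.

```yaml
# Hypothetical mergekit config for a DARE TIES merge of the two source models.
# density/weight values are placeholders, not the published recipe.
models:
  - model: teknium/OpenHermes-2.5-Mistral-7B
    parameters:
      density: 0.5   # fraction of delta weights retained (DARE drop rate = 1 - density)
      weight: 0.5    # relative contribution to the merged model
  - model: Intel/neural-chat-7b-v3-3
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1   # deltas are computed against this base
dtype: bfloat16
```

With mergekit installed, a config like this is applied via `mergekit-yaml config.yml ./output-model`. DARE TIES first randomly drops a fraction of each model's delta weights and rescales the remainder (DARE), then resolves sign conflicts between the surviving deltas before summing them (TIES), which tends to reduce interference between the merged models.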
