EmbeddedLLM/Mistral-7B-Merge-14-v0.3
Text Generation
Concurrency Cost: 1 | Model Size: 7B | Quant: FP8 | Ctx Length: 4k
Published: Dec 19, 2023 | License: apache-2.0 | Architecture: Transformer

EmbeddedLLM/Mistral-7B-Merge-14-v0.3 is a 7 billion parameter language model based on the Mistral architecture, developed by EmbeddedLLM. It is an experimental merge of 14 different Mistral-7B variants using the DARE TIES method, which combines multiple fine-tuned models by sparsifying and rescaling their weight deltas before merging them into a single set of weights. With a 4096-token context length, it is intended as a robust general-purpose base model, particularly for applications that benefit from blending the capabilities of its constituent models.
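The card does not publish the exact merge recipe, but DARE TIES merges of Mistral-7B variants are commonly expressed as a mergekit configuration. The sketch below is hypothetical: the constituent model names (`org/mistral-variant-a`, `org/mistral-variant-b`) and the `density`/`weight` values are illustrative placeholders, not the actual 14 models or parameters used for this merge.

```yaml
# Hypothetical mergekit config illustrating a DARE TIES merge of
# Mistral-7B variants. Model names and parameters are placeholders.
models:
  - model: mistralai/Mistral-7B-v0.1
    # base model contributes no delta; listed for reference
  - model: org/mistral-variant-a     # placeholder fine-tune
    parameters:
      density: 0.5                   # fraction of delta weights kept after drop
      weight: 0.3                    # mixing weight for this model's delta
  - model: org/mistral-variant-b     # placeholder fine-tune
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```

In a DARE TIES merge, each fine-tune's delta from the base model is randomly dropped to the given `density`, rescaled to compensate, sign-resolved across models (the TIES step), and summed according to the per-model weights. An actual 14-model recipe would list all fourteen variants under `models`.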
