mychen76/mistral-7b-merged-ties
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Mar 9, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

mychen76/mistral-7b-merged-ties is a 7-billion-parameter language model created by mychen76, formed by merging mistralai/Mistral-7B-v0.1, OpenPipe/mistral-ft-optimized-1218, and mlabonne/NeuralHermes-2.5-Mistral-7B with the TIES merging method (Trim, Elect Sign & Merge), which reduces parameter interference when combining fine-tuned checkpoints. Built on the Mistral architecture, the model targets general language understanding and generation tasks and is suited to applications that need a capable 7B model with a 4096-token context length.
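TIES merges like this one are commonly produced with the mergekit tool. The sketch below shows what a mergekit configuration for this three-model merge could look like; the `density` and `weight` values are illustrative assumptions, not the settings actually used by the author.

```yaml
# Hypothetical mergekit config for a TIES merge of the three source models.
# density: fraction of fine-tuned deltas kept after trimming;
# weight: relative contribution of each model (values are assumptions).
models:
  - model: OpenPipe/mistral-ft-optimized-1218
    parameters:
      density: 0.5
      weight: 0.5
  - model: mlabonne/NeuralHermes-2.5-Mistral-7B
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: true
dtype: float16
```

With mergekit installed, such a config is typically applied via `mergekit-yaml config.yml ./output-model`, producing a merged checkpoint that loads like any other Hugging Face Transformers model.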