MaziyarPanahi/MetaMath-Mistral-7B-Mistral-7B-Instruct-v0.1
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4k · Published: Jan 16, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

MetaMath-Mistral-7B-Mistral-7B-Instruct-v0.1 is a 7 billion parameter language model created by MaziyarPanahi by merging Mistral-7B-Instruct-v0.1 and MetaMath-Mistral-7B. The merge uses spherical linear interpolation (SLERP) to combine the strengths of both base models: the instruction-following ability of Mistral-7B-Instruct-v0.1 and the mathematical reasoning of MetaMath-Mistral-7B. It is designed for general-purpose text generation, with improved performance in the areas where each base model excels.
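Merges like this are commonly produced with the mergekit tool. The sketch below shows what a SLERP merge configuration for these two base models could look like; the layer ranges, interpolation weights, and dtype are illustrative assumptions, not the exact settings used for this model.

```yaml
# Hypothetical mergekit SLERP config -- values are illustrative, not the
# actual recipe used to build MetaMath-Mistral-7B-Mistral-7B-Instruct-v0.1.
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.1
        layer_range: [0, 32]
      - model: meta-math/MetaMath-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
  t:
    # Per-layer interpolation factor: 0 keeps the base model,
    # 1 keeps the other model; values in between blend them.
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

With a config like this, `mergekit-yaml config.yml ./output-model` would write the merged weights to `./output-model`.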
