Weyaxi/MetaMath-NeuralHermes-2.5-Mistral-7B-Linear
Text generation · 7B parameters · FP8 quantization · 4k context length · Published: Dec 5, 2023 · License: apache-2.0 · Architecture: Transformer

Weyaxi/MetaMath-NeuralHermes-2.5-Mistral-7B-Linear is a 7 billion parameter language model created by Weyaxi by linearly merging MetaMath-Mistral-7B and NeuralHermes-2.5-Mistral-7B. The merge aims to combine the strengths of its parents: enhanced mathematical reasoning from MetaMath and general instruction following from NeuralHermes. It processes inputs with a context length of 4096 tokens, making it suitable for tasks requiring both logical deduction and broad conversational abilities.
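A linear merge interpolates the two parent checkpoints weight-by-weight. The sketch below illustrates the idea on tiny stand-in dicts; the merge ratio `alpha=0.5` and the parameter names are illustrative assumptions, not values taken from this model's actual merge recipe (real merges operate on full state dicts, typically via a tool such as mergekit):

```python
# Minimal sketch of a linear (weighted-average) model merge.
# The dicts below are tiny stand-ins for full checkpoint state dicts.

def linear_merge(a, b, alpha=0.5):
    """Interpolate two weight dicts: alpha * a + (1 - alpha) * b."""
    assert a.keys() == b.keys(), "parents must share an architecture"
    return {k: alpha * a[k] + (1 - alpha) * b[k] for k in a}

# Hypothetical parameter values standing in for the two parent models.
metamath = {"layer0.weight": 0.2, "layer0.bias": -1.0}
neuralhermes = {"layer0.weight": 0.6, "layer0.bias": 1.0}

merged = linear_merge(metamath, neuralhermes, alpha=0.5)
print(merged)  # → {'layer0.weight': 0.4, 'layer0.bias': 0.0}
```

With `alpha=0.5` each parent contributes equally; skewing `alpha` toward one parent biases the merged model toward that parent's behavior. This only works because both parents share the same Mistral-7B architecture, so their parameter tensors line up key-for-key.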
