gqd/mistral-merge-7b
Text Generation
Concurrency Cost: 1
Model Size: 7B
Quant: FP8
Ctx Length: 4k
Published: Jan 6, 2024
License: Unlicense
Architecture: Transformer

gqd/mistral-merge-7b is a 7-billion-parameter language model created by gqd, formed by linearly merging teknium/OpenHermes-2.5-Mistral-7B and Open-Orca/Mistral-7B-SlimOrca. The merge aims to combine the strengths of its constituent Mistral-7B-based models, pairing the instruction-following and conversational capabilities of both components, and offers a 4096-token context length.
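The card does not state the interpolation weights used for the merge; a minimal sketch of linear merging, assuming an even 50/50 blend of matching parameter tensors (plain Python lists stand in for weight tensors here):

```python
def linear_merge(state_a, state_b, alpha=0.5):
    """Linearly interpolate two parameter dicts: alpha * A + (1 - alpha) * B.

    Both dicts must share the same parameter names and shapes,
    which holds for two fine-tunes of the same base model.
    """
    merged = {}
    for name, weights_a in state_a.items():
        weights_b = state_b[name]
        merged[name] = [alpha * a + (1 - alpha) * b
                        for a, b in zip(weights_a, weights_b)]
    return merged

# Toy example: two "checkpoints" with a single flattened weight vector each.
ckpt_a = {"layer.weight": [1.0, 2.0]}
ckpt_b = {"layer.weight": [3.0, 4.0]}
print(linear_merge(ckpt_a, ckpt_b))  # {'layer.weight': [2.0, 3.0]}
```

In practice such merges operate on full model state dicts (e.g. via tools like mergekit), but the arithmetic per parameter is this same elementwise interpolation.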
