MaziyarPanahi/Yarn-Mistral-7b-64k-Mistral-7B-Instruct-v0.1
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Jan 16, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

MaziyarPanahi/Yarn-Mistral-7b-64k-Mistral-7B-Instruct-v0.1 is a 7 billion parameter instruction-tuned language model created by merging Mistral-7B-Instruct-v0.1 with NousResearch/Yarn-Mistral-7b-64k. The model builds on the Mistral architecture and is designed to handle extended context lengths of up to 64k tokens, making it suitable for tasks that require processing large amounts of text. It combines the instruction-following behavior of Mistral-7B-Instruct with the long-context handling of Yarn-Mistral-7b-64k.
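Below is a minimal usage sketch with the Hugging Face transformers library, assuming the repository id above is available on the Hub and that the merged model follows the standard Mistral instruct chat template; depending on your transformers version and the model's config, YaRN rope scaling may additionally require `trust_remote_code=True`.

```python
# Hypothetical usage sketch: load the merged model and run a single instruction.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/Yarn-Mistral-7b-64k-Mistral-7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format the prompt with the model's chat template (Mistral instruct style).
messages = [{"role": "user", "content": "Summarize the following document: ..."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# Generate a response and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```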
