MaziyarPanahi/SciPhi-Self-RAG-Mistral-7B-32k-Mistral-7B-Instruct-v0.2-slerp
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Jan 11, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights
MaziyarPanahi/SciPhi-Self-RAG-Mistral-7B-32k-Mistral-7B-Instruct-v0.2-slerp is a 7-billion-parameter language model created by MaziyarPanahi via a SLERP (spherical linear interpolation) merge of Mistral-7B-Instruct-v0.2 and SciPhi-Self-RAG-Mistral-7B-32k. The merge combines the instruction-following capabilities of Mistral-7B-Instruct-v0.2 with the Retrieval-Augmented Generation (RAG) optimizations of SciPhi-Self-RAG-Mistral-7B-32k, making it suitable for tasks that require both general instruction adherence and stronger factual grounding. It inherits a 4096-token context length from its Mistral base architecture.
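To make the merge method concrete, the sketch below shows what SLERP does to a single pair of weight tensors: instead of averaging weights linearly, it interpolates along the arc between them, which better preserves each parent's weight geometry. This is an illustrative NumPy implementation under simplifying assumptions, not the actual tooling used to produce this model; a real merge applies per-layer interpolation factors across every tensor in both checkpoints.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0, t=1 returns v1; intermediate t values interpolate
    along the great-circle arc between the two tensors.
    """
    v0_f = v0.ravel().astype(np.float64)
    v1_f = v1.ravel().astype(np.float64)
    # Cosine of the angle between the two flattened weight vectors.
    dot = np.dot(v0_f, v1_f) / (np.linalg.norm(v0_f) * np.linalg.norm(v1_f) + eps)
    dot = np.clip(dot, -1.0, 1.0)
    theta = np.arccos(dot)
    # Nearly parallel tensors: fall back to plain linear interpolation.
    if theta < eps:
        return (1.0 - t) * v0 + t * v1
    sin_theta = np.sin(theta)
    w0 = np.sin((1.0 - t) * theta) / sin_theta
    w1 = np.sin(t * theta) / sin_theta
    return (w0 * v0_f + w1 * v1_f).reshape(v0.shape).astype(v0.dtype)

# Illustrative usage with small random stand-ins for one layer's weights
# (hypothetical shapes; real Mistral-7B tensors are much larger).
layer_a = np.random.randn(64, 64).astype(np.float32)  # stand-in: Mistral-7B-Instruct-v0.2
layer_b = np.random.randn(64, 64).astype(np.float32)  # stand-in: SciPhi-Self-RAG-Mistral-7B-32k
merged = slerp(0.5, layer_a, layer_b)  # midpoint merge of this layer
```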