SciPhi/SciPhi-Self-RAG-Mistral-7B-32k
Text Generation | Model Size: 7B | Quant: FP8 | Context Length: 4k | Published: Oct 27, 2023 | License: MIT | Architecture: Transformer | Open Weights

SciPhi-Self-RAG-Mistral-7B-32k is a 7-billion-parameter large language model developed by SciPhi, fine-tuned from Mistral-7B-v0.1. The model specializes in retrieval-augmented generation (RAG), having been further fine-tuned on the self-rag dataset and other RAG-oriented instruction datasets. It inherits Mistral's Transformer architecture with grouped-query attention and sliding-window attention, making it suitable for applications that require robust information retrieval and generation.
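To make the RAG workflow concrete, here is a minimal sketch of the retrieve-then-prompt pattern such a model is used with. The toy word-overlap retriever and the prompt template are illustrative assumptions, not the model's documented format; in practice you would use a real retriever and the prompt layout the model was trained on.

```python
import re

# NOTE: the retriever and prompt template below are illustrative assumptions,
# not the documented format for SciPhi-Self-RAG-Mistral-7B-32k.

def tokens(text):
    """Lowercase word tokens, keeping hyphenated terms like 'mistral-7b'."""
    return set(re.findall(r"[\w-]+", text.lower()))

def retrieve(query, corpus, k=2):
    """Toy retriever: rank documents by word overlap with the query."""
    q = tokens(query)
    scored = sorted(corpus, key=lambda d: len(q & tokens(d)), reverse=True)
    return scored[:k]

def build_prompt(query, passages):
    """Place retrieved passages in a context block ahead of the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"### Context:\n{context}\n\n### Question:\n{query}\n\n### Answer:"

corpus = [
    "Mistral-7B uses grouped-query attention for faster inference.",
    "Sliding-window attention limits each token's attention span.",
    "The Transformer architecture underlies most modern LLMs.",
]
query = "How does Mistral-7B speed up attention?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)  # the assembled prompt would then be passed to the model
```

The assembled prompt string is what would be sent to the model for generation (e.g. via an inference API or a local runtime); the retrieval step itself happens outside the model.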
