SciPhi/SciPhi-Mistral-7B-32k
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Oct 16, 2023 · License: MIT · Architecture: Transformer
SciPhi/SciPhi-Mistral-7B-32k is a 7-billion-parameter large language model (LLM) fine-tuned by SciPhi from Mistral-7B-v0.1. It was trained for four epochs on over 1 billion tokens of instruction-tuning data and synthetic textbooks, with the goal of strengthening scientific reasoning and educational ability, making it well suited to tasks that require advanced academic understanding.
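As a minimal usage sketch, the model can be loaded through the Hugging Face `transformers` library. The instruction-style prompt template below is an assumption, not the model's documented format, and `generate` downloads the full checkpoint, so it needs a GPU or ample RAM:

```python
MODEL_ID = "SciPhi/SciPhi-Mistral-7B-32k"


def build_prompt(instruction: str) -> str:
    # Generic instruction-style template; the exact prompt format the
    # model expects is an assumption here.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Lazy import so build_prompt stays usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

For example, `generate("Explain the Krebs cycle at an undergraduate level.")` would return the model's completion as a string.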