princeton-nlp/Mistral-7B-Instruct-SimPO
Text Generation · Model Size: 7B · Quantization: FP8 · Context Length: 4k · Published: May 24, 2024 · Architecture: Transformer · Concurrency Cost: 1

princeton-nlp/Mistral-7B-Instruct-SimPO is a 7-billion-parameter instruction-tuned language model based on the Mistral architecture, released by Princeton NLP and fine-tuned with SimPO (Simple Preference Optimization), a reference-free preference optimization method. It is designed for general-purpose conversational tasks and strong instruction following, and it processes inputs up to a context length of 4,096 tokens, making it suitable for a wide range of natural language understanding and generation applications.
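The sketch below shows one way to query the model locally with the Hugging Face `transformers` library; it assumes the weights are available on the Hub under the model ID above and that `transformers` and `torch` are installed. It is a minimal example, not the platform's hosted API.

```python
# Minimal sketch: local inference with Hugging Face transformers.
# Assumes the model is hosted on the Hub under its published ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "princeton-nlp/Mistral-7B-Instruct-SimPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fit the 7B weights in half precision
    device_map="auto",          # place layers on available GPU(s)/CPU
)

# Mistral instruct models ship a chat template; format the prompt with it.
messages = [{"role": "user", "content": "Explain what a context window is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Keep prompt + completion within the model's 4,096-token context length.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```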
