lvkaokao/mistral-7b-finetuned-orca-dpo-v1
Text Generation | Concurrency Cost: 1 | Model Size: 7B | Quantization: FP8 | Context Length: 4k | License: apache-2.0 | Architecture: Transformer | Open Weights

lvkaokao/mistral-7b-finetuned-orca-dpo-v1 is a 7-billion-parameter language model fine-tuned from mistralai/Mistral-7B-v0.1 for instruction following, making it suitable for conversational and task-oriented applications. It uses the Mistral 7B architecture and supports a context length of 4096 tokens.
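Below is a minimal sketch of querying the model locally with the Hugging Face transformers library, assuming the weights are published on the Hugging Face Hub under the repo id shown on this page. The instruction-style prompt template is an assumption for illustration; check the model card for the exact format it was trained with.

```python
# Minimal sketch: load the model and generate a response.
# Assumes the repo id below resolves on the Hugging Face Hub and that
# your hardware has enough memory for a 7B model; adjust dtype/device
# as needed (device_map="auto" requires the `accelerate` package).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lvkaokao/mistral-7b-finetuned-orca-dpo-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical instruction-style prompt; the actual training template may differ.
prompt = "### Instruction:\nSummarize the benefits of unit testing.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keep the prompt plus generated tokens within the 4096-token context window.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the echoed prompt.
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(response)
```

For production use, the same repo id can typically be served through an inference server instead of loaded in-process; the FP8 quantization listed above refers to the hosted deployment, while the snippet loads the model at its default precision.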
