maywell/Synatra-7B-v0.3-dpo
Text Generation · Model size: 7B · Quantization: FP8 · Context length: 4K · Concurrency cost: 1 · Published: Nov 8, 2023 · License: apache-2.0 · Architecture: Transformer · Open weights

Synatra-7B-v0.3-dpo is a 7-billion-parameter language model developed by maywell, fine-tuned from Mistral-7B-Instruct-v0.1. It is optimized for conversational AI and supports both the ChatML and Alpaca instruction formats. The model demonstrates strong performance on Korean language understanding tasks, particularly the BoolQ and SentiNeg benchmarks, making it suitable for applications that require robust Korean language processing.
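
Since the model follows the ChatML format, it can be driven through the standard Hugging Face transformers chat-template API. Below is a minimal sketch of a single-turn completion; it assumes the weights live at the repo id maywell/Synatra-7B-v0.3-dpo and that the tokenizer ships a ChatML chat template, so treat it as an illustration rather than canonical usage.

```python
# Minimal sketch: ChatML-style generation with Synatra-7B-v0.3-dpo.
# Assumes the tokenizer provides a ChatML chat template and that the
# `accelerate` package is installed (required by device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maywell/Synatra-7B-v0.3-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# apply_chat_template renders the conversation in the model's
# instruction format (ChatML here) before generation.
messages = [{"role": "user", "content": "Introduce yourself in Korean."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For Alpaca-format prompts, which the model card also lists as supported, you would build the instruction/response template as a plain string instead of relying on the chat template.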
