kaist-ai/mistral-orpo-capybara-7k
Text generation · Model size: 7B · Quantization: FP8 · Context length: 8k · Published: Mar 23, 2024 · License: MIT · Architecture: Transformer · Open weights

kaist-ai/mistral-orpo-capybara-7k is a 7-billion-parameter language model developed by KAIST AI, fine-tuned from Mistral-7B-v0.1 using Odds Ratio Preference Optimization (ORPO). The model is optimized for multi-turn conversational tasks and was trained on a distilled version of the Capybara dataset. It performs well on alignment benchmarks such as MT-Bench and AlpacaEval, making it a good fit for dialogue-based applications.
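To give a sense of how ORPO differs from two-stage preference tuning, the sketch below computes its preference penalty in pure Python. This is not the model's training code; it is a minimal illustration under the assumption that `logp_chosen` and `logp_rejected` are average per-token log-probabilities of the chosen and rejected responses. ORPO adds to the standard supervised loss a term that maximizes the odds ratio between the two, where odds(y|x) = P(y|x) / (1 − P(y|x)).

```python
import math


def log_odds(avg_logp: float) -> float:
    """log odds(y|x) from an average per-token log-probability (must be < 0)."""
    return avg_logp - math.log(1.0 - math.exp(avg_logp))


def orpo_penalty(logp_chosen: float, logp_rejected: float) -> float:
    """ORPO's relative-ratio term: -log sigmoid(log-odds ratio).

    Small when the chosen response is already much more likely than the
    rejected one; large when the preference is inverted.
    """
    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-ratio)))


def orpo_loss(nll_chosen: float, logp_chosen: float,
              logp_rejected: float, lam: float = 0.1) -> float:
    """Total ORPO objective: supervised NLL plus the weighted odds-ratio penalty.

    `lam` is a hypothetical weighting constant for this sketch, not the
    value used to train this model.
    """
    return nll_chosen + lam * orpo_penalty(logp_chosen, logp_rejected)
```

Because the penalty shares the forward pass with the supervised term, ORPO aligns the model in a single fine-tuning stage, without the separate reference model that DPO-style methods require.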
