mii-community/zefiro-7b-dpo-ITA
Text generation · Model size: 7B · Quant: FP8 · Context length: 8k · Published: Feb 20, 2024 · License: apache-2.0 · Architecture: Transformer · Concurrency cost: 1

Zefiro-7b-dpo-ITA is a 7-billion-parameter causal language model developed by giux78 and fine-tuned with DPO (Direct Preference Optimization) specifically for Italian. Built on Zefiro-7b-sft-ITA and inspired by the Zephyr recipe, it is optimized for conversational tasks and offers strong Italian understanding and generation, making it suitable for a range of Italian NLP applications.
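A minimal sketch of prompting the model, assuming it uses the Zephyr chat format (the card says Zefiro is inspired by Zephyr; check the model's own tokenizer config for the authoritative template). The helper function and the Italian example strings are illustrative, not taken from the model card.

```python
def build_zephyr_prompt(system: str, user: str) -> str:
    # Zephyr-style single-turn prompt (assumed format): system and user
    # messages each terminated with </s>, then the assistant tag so the
    # model continues from there.
    return f"<|system|>\n{system}</s>\n<|user|>\n{user}</s>\n<|assistant|>\n"

prompt = build_zephyr_prompt(
    "Sei un assistente utile che risponde in italiano.",
    "Spiega brevemente cos'è il DPO.",
)

# Generation with Hugging Face transformers (commented out: loading the
# 7B weights requires a sizeable download and a capable GPU):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="mii-community/zefiro-7b-dpo-ITA")
# print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
```

In practice, `tokenizer.apply_chat_template` is the safer route, since it reads the template shipped with the model rather than relying on the hand-written format above.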
