0-hero/Matter-0.1-7B-boost-DPO-preview
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Mar 21, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights

The 0-hero/Matter-0.1-7B-boost-DPO-preview is a 7-billion-parameter language model developed by 0-hero, fine-tuned from a Mistral 7B base using Direct Preference Optimization (DPO). It is trained on the curated Matter dataset, drawn from an analysis of over 6 billion tokens, and features native support for function calling. The model is designed for conversational AI applications that require structured tool use, and it follows the ChatML prompt format.
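Since the model expects ChatML-formatted input, a prompt can be assembled by wrapping each turn in `<|im_start|>`/`<|im_end|>` delimiters. The helper below is an illustrative sketch of that format, not code from the model card; in practice the tokenizer's own chat template should be preferred if one ships with the model.

```python
def build_chatml_prompt(messages):
    """Format a list of {role, content} dicts into a ChatML prompt string.

    Illustrative only: assumes the standard ChatML delimiters
    <|im_start|> and <|im_end|> used by Mistral-derived chat models.
    """
    parts = []
    for message in messages:
        parts.append(f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>")
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
])
print(prompt)
```

With Hugging Face `transformers`, the equivalent result is usually obtained via `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`, which uses the template bundled with the model.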
