0-hero/Matter-0.1-Slim-7B-C-DPO
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Mar 17, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

0-hero/Matter-0.1-Slim-7B-C-DPO is a 7-billion-parameter language model: a DPO-finetuned variant of Mistral 7B, produced by continued full fine-tuning on the slim-C version of the Matter dataset, which is curated from over 35 datasets covering more than 6 billion tokens. The model is designed to support function calling, enabling integration with external tools and APIs. Its primary strength is processing and responding to complex instructions, including those that require tool use.
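A minimal sketch of how function calling with a model like this can be wired up: advertise the available tools in the prompt, then check whether the completion is a JSON tool call or a plain-text reply. The ChatML-style template and the bare-JSON tool-call output format are assumptions for illustration; consult the model's tokenizer chat template on the model page for the exact format it was trained on.

```python
import json

def build_prompt(tools, user_message):
    """Assemble a ChatML-style prompt advertising the available tools.

    NOTE: the exact template Matter-0.1 expects is an assumption here;
    check the model's tokenizer chat template for the real format.
    """
    system = (
        "You are a helpful assistant with access to the following "
        "functions. Use them if required:\n" + json.dumps(tools, indent=2)
    )
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def parse_tool_call(completion):
    """Extract a JSON tool call from a completion, if one is present.

    Assumes the model emits the call as a bare JSON object with a
    'name' key (hypothetical format); returns None for plain text.
    """
    text = completion.strip()
    if text.startswith("{"):
        try:
            call = json.loads(text)
            if "name" in call:
                return call
        except json.JSONDecodeError:
            pass
    return None

# Hypothetical tool schema and model output, for illustration only.
tools = [{"name": "get_weather",
          "parameters": {"city": {"type": "string"}}}]
prompt = build_prompt(tools, "What's the weather in Paris?")
call = parse_tool_call(
    '{"name": "get_weather", "arguments": {"city": "Paris"}}')
```

In a real deployment, `prompt` would be tokenized and sent to the model, and a non-`None` result from `parse_tool_call` would be dispatched to the matching external API before continuing the conversation.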
