0-hero/Matter-0.1-7B-DPO-preview
Text generation · Model size: 7B · Quant: FP8 · Context length: 4k · Concurrency cost: 1 · Published: Mar 19, 2024 · License: apache-2.0 · Architecture: Transformer

0-hero/Matter-0.1-7B-DPO-preview is a 7-billion-parameter language model from 0-hero, a DPO-finetuned version of Matter 7B. It is fine-tuned on the Matter dataset, curated from over 35 source datasets spanning more than 6 billion tokens, and supports a 4096-token context length. The model is notable for explicit function-calling support, enabling integration with external tools and APIs, and is designed for conversational AI applications that require structured interaction and tool use.
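As a rough illustration of how function calling is typically wired up for a chat model like this, the sketch below builds a prompt that advertises a tool schema to the model and asks it to reply with a structured tool call. The ChatML-style `<|im_start|>`/`<|im_end|>` delimiters and the `<tool_call>` tag convention are assumptions, not confirmed details of this model's chat template, and `get_weather` is a hypothetical tool:

```python
import json

# Hypothetical tool-calling prompt builder. The ChatML-style special
# tokens and the <tool_call> tag convention are assumptions about the
# model's template, shown here only to illustrate the general pattern.
def build_function_calling_prompt(tools, user_message):
    tool_block = "\n".join(json.dumps(t) for t in tools)
    system = (
        "You are a function-calling assistant. Available tools:\n"
        f"{tool_block}\n"
        "To invoke a tool, respond with a JSON object inside <tool_call> tags."
    )
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example tool schema (hypothetical), in JSON Schema style
tools = [{
    "name": "get_weather",
    "description": "Look up the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

prompt = build_function_calling_prompt(tools, "What's the weather in Paris?")
print(prompt)
```

The resulting string would be passed to the model (e.g. via a standard text-generation pipeline), which, if trained in this style, responds with a `<tool_call>` payload the caller parses and executes before feeding the result back into the conversation.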
