0-hero/Matter-0.1-7B-DPO-preview

TEXT GENERATION

- Concurrency Cost: 1
- Model Size: 7B
- Quant: FP8
- Ctx Length: 4k
- Published: Mar 19, 2024
- License: apache-2.0
- Architecture: Transformer

0-hero/Matter-0.1-7B-DPO-preview is a 7-billion-parameter language model developed by 0-hero, a DPO-finetuned version of Matter 7B. It is fine-tuned on the Matter dataset, curated from over 35 source datasets with more than 6 billion tokens analyzed, and supports a 4096-token context length. The model is notable for its explicit function-calling support, enabling integration with external tools and APIs, and is designed for conversational AI applications that require structured interaction and tool use.


Matter-0.1-7B-DPO-preview Overview

This model, developed by 0-hero, is a 7-billion-parameter language model, specifically a DPO (Direct Preference Optimization) fine-tuned variant of the original Matter 7B. It provides a 4096-token context window and is trained on the extensive Matter dataset, which aggregates over 35 distinct source datasets with more than 6 billion tokens analyzed.

Key Capabilities

  • DPO Fine-tuning: Optimized using Direct Preference Optimization for improved response quality and alignment.
  • Function Calling: Explicitly supports function calling, allowing the model to interact with external tools and APIs. This includes dedicated tokens for initiating and responding to function calls.
  • ChatML Format: Utilizes the ChatML prompt format for structured conversational inputs, ensuring consistent interaction.
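As a minimal sketch of the ChatML format the model expects, the following helper renders a list of messages into a prompt string. The `<|im_start|>` / `<|im_end|>` delimiters are the standard ChatML tokens; the exact special tokens should be confirmed against the model's tokenizer configuration, and the helper itself is illustrative rather than part of the model's tooling.

```python
def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts into a ChatML prompt string."""
    parts = []
    for msg in messages:
        # Each turn is wrapped in the standard ChatML delimiters.
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the latest AI news?"},
])
print(prompt)
```

In practice, libraries such as Hugging Face `transformers` can apply a model's bundled chat template automatically via the tokenizer, which avoids hand-rolling the format.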

Good For

  • Conversational AI: Ideal for building chatbots and virtual assistants that require structured dialogue.
  • Tool-Augmented Applications: Excellent for use cases where the LLM needs to invoke external functions or retrieve real-time information, such as news aggregators or data query systems.
  • Developers: Provides a clear and documented approach for integrating function calling into AI applications.
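The card states that the model emits dedicated tokens for initiating and responding to function calls. One common integration pattern is sketched below: scan the model's output for a delimited JSON function call and dispatch it. The `<|begin_func|>` / `<|end_func|>` delimiters, the `get_news` function, and its arguments are all hypothetical placeholders; substitute the actual special tokens from the model's tokenizer and your own tool schema.

```python
import json
import re

# Placeholder delimiters; replace with the model's real function-call tokens.
BEGIN_FUNC, END_FUNC = "<|begin_func|>", "<|end_func|>"

def extract_function_call(model_output):
    """Return the parsed JSON function call, or None for plain-text answers."""
    match = re.search(
        re.escape(BEGIN_FUNC) + r"(.*?)" + re.escape(END_FUNC),
        model_output,
        re.DOTALL,
    )
    if match is None:
        return None
    return json.loads(match.group(1))

# Example model output containing a structured call to a hypothetical news API.
output = '<|begin_func|>{"name": "get_news", "arguments": {"topic": "AI"}}<|end_func|>'
call = extract_function_call(output)
print(call["name"], call["arguments"])  # get_news {'topic': 'AI'}
```

After executing the named function, the result would typically be fed back to the model in a function-response turn so it can compose the final answer.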