m-a-p/Infinity-Instruct-3M-0625-Mistral-7B-COIG-P

Text Generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Apr 2, 2025 · Architecture: Transformer

The m-a-p/Infinity-Instruct-3M-0625-Mistral-7B-COIG-P is a 7 billion parameter instruction-tuned language model based on the Mistral architecture, with a 4096-token context length. It was aligned using COIG-P, a high-quality Chinese preference dataset introduced in the paper of the same name for aligning models with human values. The model is intended for instruction-following tasks, particularly in contexts where alignment with human values is critical.

Model Overview

As its name indicates, the model combines a Mistral-7B base instruction-tuned on the Infinity-Instruct-3M-0625 corpus with preference alignment on COIG-P, and supports a 4096-token context window. COIG-P, introduced in the paper of the same name, is a large-scale, high-quality Chinese preference dataset designed for aligning models with human values.

Key Characteristics

  • Architecture: Mistral-7B base model.
  • Parameter Count: 7 billion parameters.
  • Context Length: Supports a context window of 4096 tokens.
  • Alignment Focus: Specifically fine-tuned using the COIG-P dataset, emphasizing alignment with human values, particularly in a Chinese context.
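
As a reference point, the following is a minimal sketch of loading the model with Hugging Face `transformers`. It assumes the repository ships standard Mistral-style tokenizer and config files; the dtype and device settings are illustrative and not taken from the model card.

```python
# Minimal loading sketch (assumes standard Mistral-style repo files).
# device_map="auto" requires the `accelerate` package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "m-a-p/Infinity-Instruct-3M-0625-Mistral-7B-COIG-P"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # full-precision weights; the FP8 quant above refers to the hosted deployment
    device_map="auto",
)
```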

Intended Use Cases

While the source README does not detail specific use cases, the model's foundation in the COIG-P dataset suggests its primary utility lies in applications requiring:

  • Instruction Following: Responding to user instructions in a coherent and aligned manner.
  • Value Alignment: Generating outputs that are consistent with human preferences and ethical considerations, as informed by the COIG-P dataset.
  • Chinese Language Processing: Given the origin of the COIG-P dataset, the model is likely optimized for Chinese-language tasks, though Chinese is not stated to be its sole supported language; see the generation sketch below.
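
Building on the loading sketch above, a hedged generation example follows. It assumes the repository's tokenizer defines a chat template, which is standard for Mistral-family instruct models; the Chinese prompt and sampling parameters are illustrative only.

```python
# Minimal generation sketch, continuing from the loading example above.
# The chat template is assumed to come from the repo's tokenizer config.
messages = [
    # "Introduce responsible AI in three sentences." (illustrative prompt)
    {"role": "user", "content": "请用三句话介绍一下负责任的人工智能。"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,  # keep prompt + completion within the 4096-token window
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```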