maywell/Qwen2-7B-Multilingual-RP

Source: Hugging Face

  • Task: Text generation
  • Concurrency cost: 1
  • Model size: 7.6B parameters
  • Quantization: FP8
  • Context length: 32k
  • Published: Jun 24, 2024
  • License: apache-2.0
  • Architecture: Transformer (open weights)

The maywell/Qwen2-7B-Multilingual-RP model is a 7.6 billion parameter Qwen2-based language model developed by Wanot AI, Inc. It features a 32k context length and is partly optimized for ERP (erotic roleplay) tasks. The model is specifically trained for multilingual applications with a focus on roleplay capabilities, making it suitable for diverse conversational AI scenarios.


Overview

maywell/Qwen2-7B-Multilingual-RP is a 7.6 billion parameter language model built on the Qwen2 architecture, developed by Wanot AI, Inc. It supports a context length of 32,768 tokens, making it capable of handling extensive conversational inputs. The model is partly optimized for ERP (erotic roleplay) and uses the ChatML prompt template for interaction.
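ChatML wraps each conversation turn in `<|im_start|>` / `<|im_end|>` markers. A minimal sketch of building such a prompt by hand (in practice, the tokenizer's built-in chat template handles this automatically; the message contents below are illustrative):

```python
def to_chatml(messages):
    """Format a list of {role, content} dicts into a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # A trailing assistant header cues the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a multilingual roleplay assistant."},
    {"role": "user", "content": "Hola, ¿quién eres?"},
])
print(prompt)
```

The same structure applies to every turn, so multi-turn histories are just longer message lists passed to the same function.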

Key Capabilities

  • Multilingual Roleplay: The model is specifically fine-tuned for multilingual roleplay scenarios, suggesting proficiency in generating diverse character-based interactions across languages.
  • Extended Context Window: With a 32k (32,768) token context length, it can maintain coherence and context over long conversations or complex prompts.
  • Qwen2 Architecture: Leverages the robust Qwen2 base, known for its strong performance across various language understanding and generation tasks.
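Even with a 32,768-token window, long roleplay sessions can eventually overflow it. A common pattern is to trim the oldest turns while preserving the system prompt; a minimal sketch, assuming a `count_tokens` callback (in practice the model's tokenizer):

```python
def trim_history(messages, count_tokens, budget=32768, reserve=1024):
    """Drop the oldest non-system turns until the prompt fits the budget.

    `reserve` leaves headroom for the model's generated reply.
    """
    kept = list(messages)  # copy; the caller's list is left untouched

    def total():
        return sum(count_tokens(m["content"]) for m in kept)

    while len(kept) > 1 and total() > budget - reserve:
        # Keep the system prompt at index 0; drop the oldest turn after it.
        drop = 1 if kept[0]["role"] == "system" else 0
        kept.pop(drop)
    return kept
```

Character cards and persona instructions usually live in the system prompt, which is why it is pinned rather than trimmed.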

Training Details

The model underwent extensive training, accumulating over 1,000 GPU hours on 8× A100 80GB SXM GPUs and processing more than 2 billion tokens. This significant training effort contributes to its capabilities in multilingual and roleplay contexts.

Good for

  • Developing conversational AI agents requiring multilingual support.
  • Applications involving character-based interactions or roleplay scenarios.
  • Use cases that benefit from a large context window for maintaining long-term memory or complex dialogue flows.

Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model tune the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
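These knobs map directly onto the generation parameters of most inference APIs. A sketch with illustrative values (these are example numbers, not the actual user configurations, which are not listed here):

```python
# Illustrative sampler settings; the values are examples only.
sampler_config = {
    "temperature": 0.8,         # randomness of token sampling
    "top_p": 0.95,              # nucleus sampling: keep the smallest set covering 95% probability mass
    "top_k": 40,                # sample only from the 40 most likely tokens
    "frequency_penalty": 0.1,   # penalty scaled by how often a token has appeared
    "presence_penalty": 0.0,    # flat penalty for any token that has appeared at all
    "repetition_penalty": 1.1,  # multiplicative penalty on repeated tokens
    "min_p": 0.05,              # discard tokens below 5% of the top token's probability
}
```

For roleplay workloads, a modest `repetition_penalty` or `min_p` is often used to curb the repetitive phrasing that long character interactions tend to produce.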