stephenlzc/Mistral-7B-v0.3-Chinese-Chat-uncensored

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4K · Published: Jun 24, 2024 · License: MIT · Architecture: Transformer · Open Weights

stephenlzc/Mistral-7B-v0.3-Chinese-Chat-uncensored is a 7-billion-parameter language model fine-tuned from shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat using the Unsloth framework. It is specifically designed to be uncensored, specializing in Chinese chat applications and offering a less restricted conversational experience than its base model.


Model Overview

This model, stephenlzc/Mistral-7B-v0.3-Chinese-Chat-uncensored, is a 7 billion parameter language model derived from shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat. Its primary distinction is its uncensored nature, achieved through fine-tuning with the Unsloth framework.

Key Capabilities

  • Uncensored Responses: Designed to provide less restricted outputs compared to its base model.
  • Chinese Language Focus: Optimized for chat and conversational tasks in Chinese.
  • Mistral-7B Architecture: Benefits from the efficient and capable Mistral-7B base.
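Because the model derives from the Mistral-7B instruct family, conversations are serialized with the `[INST] … [/INST]` instruction format before generation. The sketch below approximates that serialization for a multi-turn Chinese chat; the authoritative template ships with the model's tokenizer (via `tokenizer.apply_chat_template` in Hugging Face Transformers), so treat the exact spacing here as an assumption, not this model's definitive template.

```python
def build_mistral_prompt(messages):
    """Serialize a chat history into the Mistral [INST] instruction format.

    Approximation of the Mistral-7B-Instruct template: user turns are
    wrapped in [INST] ... [/INST], assistant turns are closed with </s>.
    Prefer tokenizer.apply_chat_template when the tokenizer is available.
    """
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            prompt += f" {msg['content']}</s>"
        else:
            raise ValueError(f"unsupported role: {msg['role']}")
    return prompt

# Example: a two-turn Chinese conversation awaiting the next reply
history = [
    {"role": "user", "content": "你好,请介绍一下你自己。"},
    {"role": "assistant", "content": "你好!我是一个中文对话助手。"},
    {"role": "user", "content": "今天天气怎么样?"},
]
print(build_mistral_prompt(history))
```

The string this produces ends with an open `[INST] … [/INST]` pair, which cues the model to generate the assistant's next turn.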

Training Details

The model was fine-tuned on a combination of Chinese instruction datasets: Minami-su/toxic-sft-zh, llm-wizard/alpaca-gpt4-data-zh, and stephenlzc/stf-alpaca. Training ran on a single A100 SXM4 80GB GPU in a PyTorch 2.2.0 environment.
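Mixing several instruction datasets like the ones above typically means normalizing them to one schema and shuffling before SFT. A minimal sketch, assuming the common alpaca-style `(instruction, input, output)` field convention; this is an illustration, not the author's actual preprocessing pipeline:

```python
import random

def normalize(record):
    """Map a raw alpaca-style record onto a uniform (instruction, input, output) schema."""
    return {
        "instruction": record.get("instruction", "").strip(),
        "input": record.get("input", "").strip(),  # optional field in many alpaca sets
        "output": record.get("output", "").strip(),
    }

def merge_datasets(*datasets, seed=42):
    """Concatenate several normalized datasets and shuffle deterministically for SFT."""
    merged = [normalize(r) for ds in datasets for r in ds]
    random.Random(seed).shuffle(merged)
    return merged

# Toy records standing in for the Chinese datasets named above
ds_a = [{"instruction": "把这句话翻译成英文", "input": "你好", "output": "Hello"}]
ds_b = [{"instruction": "写一首关于春天的诗", "output": "春风拂面,万物复苏。"}]
mixed = merge_datasets(ds_a, ds_b)
print(len(mixed))  # 2
```

A fixed shuffle seed keeps the training mix reproducible across runs, which matters when comparing fine-tuning checkpoints.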

Good for

  • Applications requiring unfiltered Chinese conversational AI.
  • Research into uncensored language model behavior.
  • Developers seeking a Mistral-7B variant with specific Chinese language and uncensored characteristics.