Orion-zhen/Qwen2.5-14B-Instruct-Uncensored

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 14.8B · Quant: FP8 · Ctx Length: 32k · Published: Oct 21, 2024 · License: GPL-3.0 · Architecture: Transformer · Open Weights

Orion-zhen/Qwen2.5-14B-Instruct-Uncensored is a 14.8-billion-parameter instruction-tuned language model fine-tuned from Qwen/Qwen2.5-14B-Instruct. Developed by Orion-zhen, it was trained on an unalignment dataset to remove censorship, making it suitable for applications that require unrestricted responses. It supports multiple languages, including Chinese, English, and Japanese, and is designed for use cases where an uncensored system prompt is desired.


Qwen2.5-14B-Instruct-Uncensored Overview

This model, developed by Orion-zhen, is an uncensored fine-tune of the Qwen/Qwen2.5-14B-Instruct base model. It has 14.8 billion parameters and was trained on an unalignment dataset to remove typical AI restrictions and biases. Its primary differentiator is its uncensored behavior, which is activated by a specific system prompt.

Key Capabilities

  • Uncensored Responses: Designed to provide unrestricted outputs, bypassing typical safety filters.
  • Multilingual Support: Capable of processing and generating text in numerous languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.
  • Instruction Following: Retains the instruction-following capabilities of its base Qwen2.5-14B-Instruct model.
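Since the uncensored behavior depends on supplying the right system prompt, the request structure matters in practice. Below is a minimal sketch of how a chat request to this model might be assembled, assuming an OpenAI-compatible serving endpoint; the `build_chat_request` helper is illustrative, and the system prompt text is a placeholder for the one documented on the model card, which is not reproduced here.

```python
import json

# Model identifier as published on Hugging Face.
MODEL_ID = "Orion-zhen/Qwen2.5-14B-Instruct-Uncensored"

# Placeholder: the model card specifies the exact system prompt that
# activates uncensored behavior; substitute it here.
SYSTEM_PROMPT = "<system prompt from the model card>"

def build_chat_request(user_message: str, max_tokens: int = 512) -> dict:
    """Assemble an OpenAI-compatible chat-completion payload.

    The system message carries the prompt that toggles the model's
    uncensored mode; the user message follows as usual.
    """
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("Summarize the plot of Hamlet.")
print(json.dumps(payload, indent=2))
```

The same message list works unchanged with local inference via the `transformers` chat template, since Qwen2.5 models consume the standard system/user role format.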

Good for

  • Research into AI alignment and safety: Useful for studying the effects of removing censorship from large language models.
  • Applications requiring unrestricted content generation: Suitable for use cases where standard content filters are undesirable or counterproductive.
  • Exploring model behavior without imposed constraints: Provides a platform to observe how a powerful LLM responds when not bound by typical ethical or safety guidelines.