huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2

Hugging Face
Text generation · Concurrency cost: 1 · Model size: 14.8B · Quant: FP8 · Context length: 32K · Published: Oct 9, 2024 · License: apache-2.0 · Architecture: Transformer

huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2 is a 14.8 billion parameter instruction-tuned causal language model developed by huihui-ai, based on Qwen/Qwen2.5-14B-Instruct. This model has been modified using an 'abliteration' technique to remove censorship, making it an uncensored variant. It supports a context length of 131072 tokens and is primarily designed for applications requiring an instruction-following model without content restrictions.


Model Overview

huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2 is a 14.8 billion parameter instruction-tuned language model derived from the Qwen/Qwen2.5-14B-Instruct base. Developed by huihui-ai, this version has undergone a process known as "abliteration" to remove inherent censorship, distinguishing it from its base model. This modification aims to provide a more open-ended and less restricted conversational AI experience.

Key Characteristics

  • Uncensored Output: The primary differentiator is its abliterated nature, designed to generate responses without the content restrictions typically found in instruction-tuned models.
  • Base Model: Built upon the robust Qwen2.5-14B-Instruct architecture, inheriting its multilingual capabilities (supporting languages like Chinese, English, French, Spanish, German, etc.) and instruction-following proficiency.
  • Context Length: Features a substantial context window of 131072 tokens, allowing for extended conversations and processing of lengthy inputs.
  • Improved Version: This v2 iteration is noted as an improvement over its predecessor, Qwen2.5-14B-Instruct-abliterated.

Use Cases

This model is suitable for developers and applications that require an instruction-following large language model with an emphasis on uncensored content generation. It can be integrated into various applications using the Hugging Face transformers library, with example code provided for conversational use. It is also available for use with Ollama.
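As a minimal sketch of the conversational use mentioned above, the following shows a typical transformers chat workflow for this model. The model ID is taken from this card; the helper names (`build_messages`, `generate`) and the generation settings are illustrative, not the card's official example code, and running it requires the model weights and suitable hardware.

```python
MODEL_ID = "huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2"

def build_messages(user_prompt, system_prompt="You are a helpful assistant."):
    """Build the chat-format message list consumed by the model's chat template."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def generate(prompt, max_new_tokens=512):
    """Load the model, apply the chat template, and return the decoded reply."""
    # Deferred import: loading transformers (and the 14.8B weights below) is heavy.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Render the messages into the model's prompt format, ending with the
    # assistant turn so generation continues from there.
    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens so only the newly generated reply is decoded.
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Example (requires the downloaded weights and a suitable GPU):
# print(generate("Give me a short introduction to large language models."))
```

The same chat-template flow applies when serving the model through Ollama or any other runtime that understands the Qwen2.5 prompt format.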

Popular Sampler Settings

The three most popular parameter combinations among Featherless users for this model adjust the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
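To show how the sampler parameters listed above fit together in practice, here is a hedged sketch of a request payload for an OpenAI-compatible chat-completions endpoint. The numeric values are illustrative placeholders, not the actual user-favored configurations, and `top_k`, `repetition_penalty`, and `min_p` are server-side extensions rather than standard OpenAI fields.

```python
def build_request(prompt, **sampler_overrides):
    """Assemble a chat-completions payload with the sampler parameters above.

    Default values are placeholders; pass keyword overrides to change them.
    """
    samplers = {
        "temperature": 0.7,
        "top_p": 0.9,
        "top_k": 40,                 # extension parameter on many servers
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
        "repetition_penalty": 1.05,  # extension parameter
        "min_p": 0.05,               # extension parameter
    }
    samplers.update(sampler_overrides)
    return {
        "model": "huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2",
        "messages": [{"role": "user", "content": prompt}],
        **samplers,
    }
```

Usage: `build_request("Hello", temperature=1.0)` returns a dict ready to be sent as the JSON body of a chat-completions request, with `temperature` overridden to 1.0 and the remaining samplers at their placeholder defaults.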