huihui-ai/QRWKV6-32B-Instruct-Preview-v0.1-abliterated
Text Generation · Concurrency Cost: 1 · Model Size: 32B · Quant: FP8 · Ctx Length: 32k · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

The huihui-ai/QRWKV6-32B-Instruct-Preview-v0.1-abliterated model is a 32 billion parameter instruction-tuned language model derived from recursal's QRWKV6-32B-Instruct-Preview-v0.1. This version has been modified using an "abliteration" technique to remove refusal behaviors, producing an uncensored variant. It serves as a proof-of-concept for refusal removal without TransformerLens, making it suitable for research into model safety and response generation. The model supports a 32768 token context length.


Model Overview

The huihui-ai/QRWKV6-32B-Instruct-Preview-v0.1-abliterated is a 32 billion parameter instruction-tuned language model. It is based on recursal/QRWKV6-32B-Instruct-Preview-v0.1 but has undergone a process called "abliteration" to remove refusal behaviors, resulting in an uncensored variant.

Key Characteristics

  • Uncensored Responses: This model is specifically designed to provide responses without the typical refusal mechanisms found in many instruction-tuned LLMs. This is achieved through an "abliteration" technique.
  • Proof-of-Concept: It serves as a demonstration of removing refusals from an LLM without relying on TransformerLens, utilizing methods detailed in the remove-refusals-with-transformers project.
  • 32 Billion Parameters: A substantial parameter count, giving the model broad capacity for understanding and generating complex language.
  • 32768 Token Context Length: Capable of processing and generating text within a large context window.
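The "abliteration" technique referenced above is commonly described as directional ablation: estimate a "refusal direction" from the difference in mean activations between prompts the model refuses and prompts it answers, then project that direction out of the model's weights. The model card does not publish the exact procedure, so the following is a minimal, hypothetical sketch of the core linear algebra using NumPy and synthetic activation data; the function names and data are illustrative, not taken from the remove-refusals-with-transformers project.

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Estimate a unit 'refusal direction' as the difference of mean
    activations on refused vs. answered prompts (difference-of-means)."""
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(W: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Remove the component of each row of a weight matrix W along
    the unit direction d, so W can no longer write into that direction."""
    return W - np.outer(W @ d, d)

# Synthetic stand-ins for hidden activations collected from the two prompt sets.
rng = np.random.default_rng(0)
harmful = rng.normal(0.5, 1.0, size=(8, 16))
harmless = rng.normal(-0.5, 1.0, size=(8, 16))

d = refusal_direction(harmful, harmless)
W = rng.normal(size=(16, 16))          # stand-in for one projection matrix
W_ablated = ablate(W, d)

# After ablation, W's output has no component along the refusal direction.
print(np.allclose(W_ablated @ d, 0.0, atol=1e-8))
```

In a real pipeline this projection would be applied to the relevant weight matrices of each transformer block; the sketch only demonstrates why the projected matrix can no longer produce the refusal direction.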

Intended Use Cases

  • Research into Model Safety and Alignment: Ideal for researchers exploring methods to modify LLM behavior, particularly concerning censorship and refusal mechanisms.
  • Exploring Unfiltered Language Generation: Useful for applications where unfiltered or direct responses are desired, provided ethical guidelines are followed.
  • Development of Abliteration Techniques: Can be used as a base for further experimentation and refinement of methods to control LLM output characteristics.