TheBloke/Wizard-Vicuna-13B-Uncensored-HF

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4k · Published: May 13, 2023 · License: other · Architecture: Transformer

TheBloke/Wizard-Vicuna-13B-Uncensored-HF is a 13-billion-parameter language model, converted to float16 from Eric Hartford's uncensored training of Wizard-Vicuna 13B. The model ships without built-in alignment or moralizing responses, so alignment can be added separately and customized, for example with an RLHF LoRA. With a 4096-token context length, it serves as a neutral base for developers building their own applications on top of it.


Wizard-Vicuna-13B-Uncensored-HF Overview

This model is a 13-billion-parameter language model, provided in float16 Hugging Face format for efficient GPU inference and further conversions. It originates from Eric Hartford's uncensored training of the Wizard-Vicuna 13B model.

Key Characteristics

  • Uncensored Training: Unlike many aligned models, this version was trained on a dataset from which responses containing alignment or moralizing content had been removed. This design choice yields a neutral base model.
  • Custom Alignment: The primary intent behind its uncensored nature is to enable users to add their own specific alignment, such as through RLHF LoRAs, tailored to their particular use case.
  • Float16 Format: Weights are distributed in float16, which halves the storage footprint relative to float32 and allows direct GPU inference or conversion to further quantized formats.
  • Context Length: Supports a context window of 4096 tokens.
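The 4096-token window must cover both the prompt and the generated continuation, so long conversations need to be trimmed before inference. A minimal sketch of that budgeting, in plain Python (the helper name and the 512-token generation reserve are illustrative choices, not part of the model):

```python
def trim_to_context(token_ids, max_ctx=4096, reserve=512):
    """Keep the most recent prompt tokens, leaving `reserve` tokens
    of the `max_ctx`-token window free for the model's reply."""
    budget = max_ctx - reserve
    if budget <= 0:
        raise ValueError("reserve must be smaller than max_ctx")
    # Drop the oldest tokens first; the tail holds the recent turns.
    return token_ids[-budget:] if len(token_ids) > budget else token_ids

# Example: a 5000-token prompt is cut to the newest 3584 tokens,
# leaving 512 tokens of the 4096-token window for generation.
prompt = list(range(5000))
trimmed = trim_to_context(prompt)
```

In practice `token_ids` would come from the model's tokenizer; trimming at turn boundaries rather than raw token offsets usually preserves more coherence.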

Use Cases

  • Foundation for Custom Alignment: Ideal for developers who need a base model to implement their own ethical guidelines, safety filters, or specific behavioral alignments.
  • Research into Alignment: Useful for studying the effects of different alignment techniques on language models without the confound of pre-existing alignment training.
  • Applications Requiring Neutrality: Suitable for scenarios where a model's output should not be influenced by pre-existing moralizing or alignment, allowing for user-defined control.
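As a sketch of how a base model like this is typically loaded for the use cases above, assuming the `transformers` and `torch` packages are installed (the imports are deferred into the function so that the multi-gigabyte download only happens when it is actually called; the function name is illustrative):

```python
def load_wizard_vicuna(device_map="auto"):
    """Load the float16 weights of the model for GPU inference.

    Imports are deferred because transformers/torch are heavy,
    optional dependencies and loading triggers a large download.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "TheBloke/Wizard-Vicuna-13B-Uncensored-HF"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo,
        torch_dtype=torch.float16,  # match the published float16 weights
        device_map=device_map,      # shard layers across available GPUs
    )
    return tokenizer, model
```

Note that 13B parameters in float16 amount to roughly 26 GB of weights, so full-precision inference needs a correspondingly large GPU (or multi-GPU sharding via `device_map`).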