TheBloke/Wizard-Vicuna-7B-Uncensored-HF
TheBloke/Wizard-Vicuna-7B-Uncensored-HF is a 7-billion-parameter language model: a float16 conversion of Eric Hartford's 'uncensored' training of Wizard-Vicuna 7B. The training deliberately removed built-in alignment and moralizing responses, so alignment can be added separately and customized, for example with an RLHF LoRA. The model offers a 4096-token context length and is suitable for GPU inference and further model conversions.
Wizard-Vicuna-7B-Uncensored-HF Overview
This model is a float16 Hugging Face (HF) format conversion of Eric Hartford's 'uncensored' Wizard-Vicuna 7B. The defining characteristic of this 7-billion-parameter model is that alignment and moralizing responses were deliberately removed from its training data. This design choice leaves developers free to implement their own alignment mechanisms, such as an RLHF LoRA, independently.
Key Capabilities
- Uncensored Base: Provides a foundation free from pre-built ethical or moral guardrails, offering maximum flexibility for custom alignment.
- Float16 Format: Optimized for easier storage and efficient GPU inference.
- Conversion Ready: Suitable as a base for further model conversions and fine-tuning.
- 4096 Token Context: Supports processing of moderately long inputs and generating coherent responses within this window.
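The capabilities above can be sketched with Hugging Face Transformers. This is a minimal example, not an official snippet from this card: the Vicuna-style `USER:`/`ASSISTANT:` prompt template is an assumption about this fine-tune, and loading downloads roughly 13 GB of float16 weights, so the loader is kept in a separate function.

```python
# Hedged sketch: float16 GPU inference with Hugging Face Transformers.
# The prompt template is an assumption about this fine-tune, not documented here.

MODEL_ID = "TheBloke/Wizard-Vicuna-7B-Uncensored-HF"


def build_prompt(user_message: str) -> str:
    """Single-turn Vicuna-style prompt (assumed format for this model)."""
    return f"USER: {user_message}\nASSISTANT:"


def load_model():
    """Load the tokenizer and float16 weights onto available GPU(s).

    Imports are local so build_prompt() stays usable without
    transformers/torch installed; calling this downloads ~13 GB.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # weights are already stored in float16
        device_map="auto",          # spread layers across available GPU(s)
    )
    return tokenizer, model


print(build_prompt("Explain float16 inference."))
```

Because the weights are already float16, no extra quantization step is needed; `torch_dtype=torch.float16` simply loads them as stored, at roughly half the memory of a float32 checkpoint.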
Good For
- Developers requiring a highly flexible base model where alignment can be entirely customized.
- Research into different alignment techniques without interference from pre-existing model biases.
- Applications where specific, non-standard ethical or content guidelines need to be implemented post-training.
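One way to add alignment post-training, as the card suggests, is to layer a separately trained LoRA adapter over the frozen base. The sketch below uses the `peft` library; `ADAPTER_ID` is a hypothetical placeholder, not a published adapter.

```python
# Hedged sketch: applying a custom alignment adapter (e.g. an RLHF LoRA)
# on top of this uncensored base model using the `peft` library.
# ADAPTER_ID is hypothetical -- substitute your own trained adapter.

BASE_MODEL_ID = "TheBloke/Wizard-Vicuna-7B-Uncensored-HF"
ADAPTER_ID = "your-org/your-alignment-lora"  # hypothetical adapter repo


def load_aligned_model():
    """Load the base model, then apply a LoRA adapter over it.

    Imports are local so this file runs without torch/peft installed;
    calling this downloads the base weights plus the adapter.
    """
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained(
        BASE_MODEL_ID,
        torch_dtype=torch.float16,
        device_map="auto",
    )
    # The adapter's low-rank weight deltas sit on top of the frozen base,
    # so all alignment behavior lives in the (small) adapter checkpoint.
    return PeftModel.from_pretrained(base, ADAPTER_ID)
```

Keeping alignment in the adapter means different deployments can swap guidelines by swapping adapters, without retraining or re-downloading the 7B base.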