Overview
This model, digitalpipelines/llama2_13b_chat_uncensored, is a 13-billion-parameter variant of the Llama-2-Chat architecture, fine-tuned by digitalpipelines. Its primary distinction is that it was trained on an uncensored/unfiltered Wizard-Vicuna conversation dataset (digitalpipelines/wizard_vicuna_70k_uncensored). The fine-tuning was performed with QLoRA and the resulting adapter was merged back into the base weights; the goal is to reduce the alignment-driven refusals and biases of the original Llama-2 chat model, enabling more direct, less restricted conversational output.
Key Capabilities
- Uncensored Responses: Designed to provide unfiltered and less restricted conversational outputs compared to standard Llama-2 Chat models.
- Llama-2 Architecture: Benefits from the robust base architecture of Llama-2 13B.
- Context Length: Supports a context window of 4096 tokens.
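To illustrate the 4096-token context window, here is a minimal sketch of trimming conversation history to fit the window. It uses whitespace splitting as a crude stand-in for the model's real tokenizer, so the counts are approximate; in practice you would count tokens with the model's own tokenizer.

```python
# Rough sketch of trimming conversation history to a context window.
# Whitespace splitting is a crude stand-in for the real tokenizer:
# the actual 4096 limit applies to tokenizer tokens, not words.

CONTEXT_LIMIT = 4096  # Llama-2 13B context window, in tokens

def approx_tokens(text: str) -> int:
    """Very rough token estimate (word count)."""
    return len(text.split())

def trim_history(turns: list[str], limit: int = CONTEXT_LIMIT) -> list[str]:
    """Drop the oldest turns until the approximate total fits the window."""
    kept: list[str] = []
    total = 0
    for turn in reversed(turns):  # walk from the most recent turn backwards
        n = approx_tokens(turn)
        if total + n > limit:
            break
        kept.append(turn)
        total += n
    return list(reversed(kept))

# An oversized old turn is dropped; the recent turn survives.
history = ["old " * 5000, "recent question?"]
print(trim_history(history))  # → ["recent question?"]
```

Real inference stacks usually truncate at the token level or summarize older turns instead of dropping them wholesale; this sketch only shows the budget arithmetic.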
Good For
- Applications requiring a conversational AI with fewer content restrictions.
- Research into model bias and the effects of uncensored fine-tuning.
- Use cases where the default safety mechanisms of Llama-2 are deemed too restrictive.
Prompt Template
The model uses the standard Llama-2-Chat prompt format, including a system prompt that instructs the model to be helpful, respectful, and honest. In practice, the uncensored fine-tuning means the model follows this framing with far fewer refusals than the original Llama-2 chat model.
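A minimal sketch of the standard single-turn Llama-2-Chat template, which wraps the user message in `[INST] ... [/INST]` with the system prompt inside `<<SYS>> ... <</SYS>>`. The system prompt text below is illustrative, not necessarily this model's exact default.

```python
# Build a single-turn prompt in the Llama-2-Chat format.
# SYSTEM_PROMPT is illustrative; the model card's actual default may differ.

SYSTEM_PROMPT = "You are a helpful, respectful and honest assistant."

def build_prompt(user_message: str, system_prompt: str = SYSTEM_PROMPT) -> str:
    """Wrap one user message in the [INST] <<SYS>> ... <</SYS>> ... [/INST] template."""
    return (
        "[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = build_prompt("What is the capital of France?")
print(prompt)
```

For multi-turn conversations, each prior exchange is appended as `[INST] user [/INST] assistant` before the new turn; tokenizers that ship a chat template (e.g. via `tokenizer.apply_chat_template` in recent transformers versions) can produce this string for you.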