NaniDAO/Meta-Llama-3.1-8B-Instruct-ablated-v1
NaniDAO/Meta-Llama-3.1-8B-Instruct-ablated-v1 is an 8-billion-parameter instruction-tuned causal language model based on Meta's Llama 3.1 architecture, with a 32,768-token context length. The model has undergone an ablation process that reduces refusal behavior, aiming for a less censored user experience. It is intended for general instruction-following tasks where more open, less restrictive response generation is desired.
Overview
NaniDAO/Meta-Llama-3.1-8B-Instruct-ablated-v1 is an 8-billion-parameter instruction-tuned language model derived from Meta's Llama-3.1-8B-Instruct. It retains the original model's 32,768-token context length.
Key Differentiator
This model's primary distinction is the ablation applied to it, which specifically targets and reduces refusal behavior. The modification aims to give users a less censored, more open-ended interaction experience than the base model provides.
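The card does not describe NaniDAO's exact procedure, but refusal-reducing ablation is commonly implemented as directional ablation (sometimes called "abliteration"): a "refusal direction" is estimated as the difference in mean hidden activations between prompts the model refuses and prompts it answers, and that direction is then projected out of weight matrices that write to the residual stream. The NumPy sketch below illustrates the general technique only; the function names, shapes, and data are hypothetical and not taken from this model's actual pipeline.

```python
import numpy as np

def refusal_direction(refused_acts, complied_acts):
    """Estimate a unit-norm refusal direction by difference of means.

    refused_acts, complied_acts: (n_prompts, hidden_dim) arrays of
    hidden activations captured at some layer (hypothetical data).
    """
    d = refused_acts.mean(axis=0) - complied_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate_direction(W, d):
    """Project the refusal direction out of a weight matrix.

    W: (hidden_dim, in_dim) matrix whose output feeds the residual
    stream. Left-multiplying by (I - d d^T) removes the component of
    every output vector along d, so W can no longer "write" refusal.
    """
    return W - np.outer(d, d) @ W

# Toy demonstration with random stand-in activations.
rng = np.random.default_rng(0)
refused = rng.normal(size=(16, 8)) + 2.0   # shifted cluster (hypothetical)
complied = rng.normal(size=(16, 8))
d = refusal_direction(refused, complied)

W = rng.normal(size=(8, 8))
W_ablated = ablate_direction(W, d)

x = rng.normal(size=8)
print(abs(d @ (W_ablated @ x)))  # output component along d is ~0
```

Applied across all layers' output projections, this leaves the model's weights otherwise intact, which is why an ablated model keeps the base model's size and context length.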
Intended Use
This model is designed for general instruction-following applications where users prioritize uncensored, less restrictive output. It improves on its predecessor, v0, in this specific regard.
Considerations
Users are advised to exercise their own discretion and use this model responsibly, given its modified refusal characteristics.