failspy/llama-3-70B-Instruct-abliterated
Text generation · Concurrency cost: 4 · Model size: 70B · Quant: FP8 · Context length: 8k · Published: May 7, 2024 · License: llama3 · Architecture: Transformer

The failspy/llama-3-70B-Instruct-abliterated model is a 70-billion-parameter instruction-tuned language model derived from Meta's Llama-3-70B-Instruct. Its weights have been manipulated to orthogonalize out the "refusal direction," with the aim of suppressing the model's tendency to refuse requests or lecture on ethics. In all other respects it retains the original Llama-3-70B-Instruct tuning, including the 8192-token context length. Its primary differentiator is this experimental reduction of refusal behavior, which makes it suited to use cases where direct responses are preferred over ethical caveats.
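The core idea behind this kind of "abliteration" can be sketched as an orthogonal projection: given an estimated refusal direction r (in practice derived from activation differences on refusing vs. non-refusing prompts), each affected weight matrix is replaced by W' = (I - r rᵀ) W so that its outputs carry no component along r. The sketch below is a minimal, hypothetical illustration with a random matrix and direction, not the actual procedure applied to this model:

```python
import numpy as np

def orthogonalize(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component of W's output space along direction r.

    Implements W' = (I - r r^T) W, so that r^T W' = 0.
    """
    r = r / np.linalg.norm(r)       # work with a unit vector
    return W - np.outer(r, r) @ W   # subtract the projection onto r

rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((d, d))     # placeholder weight matrix
r = rng.standard_normal(d)          # placeholder "refusal direction"

W_prime = orthogonalize(W, r)
r_hat = r / np.linalg.norm(r)
# Outputs of the modified matrix have no component along r:
print(np.allclose(r_hat @ W_prime, 0.0, atol=1e-10))
```

In the real technique the projection is applied to specific weight matrices (e.g. attention output and MLP down-projections) across layers, leaving all other parameters untouched, which is why the model otherwise behaves like the base Instruct tune.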


Popular Sampler Settings

The three parameter combinations most commonly used by Featherless users for this model. Each configuration sets the following sampler parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
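These parameters map directly onto fields of an OpenAI-compatible completions request, which is how many Llama-serving backends expose them. The sketch below builds such a payload; the numeric values are placeholders for illustration, not recommended settings for this model, and note that top_k, repetition_penalty, and min_p are common extensions beyond the base OpenAI API:

```python
import json

# Hypothetical request payload using the sampler parameters listed above.
payload = {
    "model": "failspy/llama-3-70B-Instruct-abliterated",
    "prompt": "Summarize orthogonal projection in one sentence.",
    "max_tokens": 128,
    "temperature": 0.7,        # randomness of sampling
    "top_p": 0.9,              # nucleus sampling cutoff
    "top_k": 40,               # restrict to the k most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens by how often they appeared
    "presence_penalty": 0.0,   # penalize tokens that appeared at all
    "repetition_penalty": 1.1, # multiplicative penalty on repeats
    "min_p": 0.05,             # drop tokens below this fraction of the top probability
}

print(json.dumps(payload, indent=2))
```

Sending this body to a compatible `/v1/completions` endpoint (with the appropriate API key) would apply the chosen sampler configuration to generations from this model.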