The huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated model is a 24-billion-parameter, instruction-tuned causal language model derived from mistralai/Mistral-Small-24B-Instruct-2501. It has been modified with an "abliteration" technique that removes refusal behaviors, producing an uncensored variant. It is primarily intended for use cases that require a large language model without built-in content-refusal mechanisms.
Model Overview
huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated is a 24-billion-parameter instruction-tuned language model, a modified version of the original mistralai/Mistral-Small-24B-Instruct-2501.
Key Differentiator
The defining characteristic of this model is its uncensored nature, achieved through a process called "abliteration." This technique identifies and removes the refusal behaviors typically present in instruction-tuned models. The modification was implemented as a proof of concept using the remove-refusals-with-transformers approach.
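To make the idea concrete, here is a minimal, hypothetical sketch of the directional-ablation step behind techniques like this: estimate a "refusal direction" as the difference of mean activations between refused and complied prompts, then project that direction out of a weight matrix that writes into the residual stream. All shapes, data, and names here are toy placeholders, not the actual remove-refusals-with-transformers code.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size (real models use thousands of dimensions)

# Mean hidden states over "refused" vs. "complied" prompts (toy data).
mean_refuse = rng.normal(size=d)
mean_comply = rng.normal(size=d)

# Refusal direction: normalized difference of means.
r = mean_refuse - mean_comply
r /= np.linalg.norm(r)

# A toy output-projection weight matrix; its output feeds the residual
# stream, so we remove the component of each output along r.
W = rng.normal(size=(d, d))
W_ablated = W - np.outer(r, r) @ W

# After ablation, the layer's output has no component along r.
x = rng.normal(size=d)
print(abs(r @ (W_ablated @ x)))  # ~0 up to float error
```

The key point is that the edit is applied to the weights themselves, so the refusal component is removed permanently rather than patched at inference time.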
Use Cases
This model is particularly suited to applications that need a large language model without the default refusal mechanisms of standard instruction-tuned models. Its uncensored nature permits a broader range of responses, which can be useful for research into model safety and control, as well as for specific creative and experimental applications.
Accessibility
The model is readily available through Ollama, which simplifies deployment and interaction. Users can run it directly via ollama run huihui_ai/mistral-small-abliterated.
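As a quick sketch, the command from the model page can be used as follows (this assumes Ollama is installed locally and that the tag shown above is still current):

```shell
# Download the model weights, then start an interactive chat session.
ollama run huihui_ai/mistral-small-abliterated
```

The first invocation pulls the weights; subsequent runs start immediately from the local cache.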