huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated
huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated is a 14.8 billion parameter instruction-tuned language model derived from Qwen/Qwen2.5-14B-Instruct-1M. The model has been modified with an abliteration technique to remove refusal behaviors, yielding uncensored responses. It retains the original 131,072-token context length and is primarily intended for applications requiring direct, unfiltered language generation.
Overview
huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated is a 14.8 billion parameter instruction-tuned model based on the Qwen2.5-14B-Instruct-1M architecture. Its primary distinction is its "abliterated" nature: it has undergone a process that removes refusal mechanisms, resulting in uncensored output. The modification was implemented as a proof of concept using techniques described in the remove-refusals-with-transformers project, without relying on TransformerLens.
Key Capabilities
- Uncensored Output: Designed to generate responses without typical refusal behaviors found in many instruction-tuned models.
- Instruction Following: Retains the instruction-following capabilities of its base Qwen2.5-14B-Instruct-1M model.
- Large Context Window: Supports a context length of 131,072 tokens, enabling processing of very long inputs.
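Since the model keeps the standard Qwen2.5 chat interface, it can be loaded with Hugging Face `transformers` like any other Qwen2.5 checkpoint. A minimal sketch follows; the system prompt, generation settings, and dtype choice are illustrative assumptions, not recommendations from the model authors, and running it requires substantial GPU memory for a 14.8B model.

```python
# Minimal sketch: chat-style generation with Hugging Face transformers.
# Assumes `transformers` and `torch` are installed and enough GPU memory
# is available; prompt and generation settings are illustrative only.
MODEL_ID = "huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated"


def build_messages(user_prompt: str) -> list[dict]:
    """Assemble a chat-format message list for the model's chat template."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


def main() -> None:
    # Heavy imports are deferred so the helper above works without them.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages("Summarize the abliteration technique in one sentence."),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```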
Good For
- Research into Model Refusal Mechanisms: Useful for studying the effects and removal of refusal behaviors in large language models.
- Applications Requiring Unfiltered Responses: Suitable for use cases where direct and uncensored language generation is a specific requirement.
- Ollama Integration: Available directly through Ollama, simplifying local inference via `ollama run huihui_ai/qwen2.5-1m-abliterated:14b`.
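Beyond the interactive `ollama run` command, a locally running Ollama server also exposes the model over its REST API. The sketch below assumes `ollama serve` is running on the default port and the `huihui_ai/qwen2.5-1m-abliterated:14b` tag has already been pulled; the prompt is illustrative.

```python
# Sketch: non-streaming generation via the Ollama REST API (/api/generate).
# Assumes a local Ollama server on the default port 11434 and that the
# model tag below has already been pulled with `ollama pull`.
import json

OLLAMA_MODEL = "huihui_ai/qwen2.5-1m-abliterated:14b"


def build_payload(prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate."""
    return {"model": OLLAMA_MODEL, "prompt": prompt, "stream": False}


def main() -> None:
    # Network call is deferred so the helper above works without a server.
    from urllib.request import Request, urlopen

    body = json.dumps(build_payload("Explain your context window limit.")).encode()
    req = Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        print(json.loads(resp.read())["response"])


if __name__ == "__main__":
    main()
```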