The mlabonne/Qwen3-1.7B-abliterated model is an uncensored 1.7-billion-parameter variant of Qwen/Qwen3-1.7B, developed by mlabonne. It supports a 40,960-token context length and was produced with an experimental "abliteration" technique that removes refusal behaviors. The model is primarily a research project aimed at understanding refusal mechanisms and latent fine-tuning in large language models, offering uncensored output for specific research or creative applications.
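A minimal sketch of loading the model with the Hugging Face `transformers` library, assuming the weights are hosted on the Hub under the model ID above; the prompt and generation settings are illustrative assumptions, not part of the card:

```python
# Minimal sketch: loading mlabonne/Qwen3-1.7B-abliterated with transformers.
# The prompt and max_new_tokens value below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mlabonne/Qwen3-1.7B-abliterated"


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run a single chat turn through the model and return the reply text."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Format the user message with the model's built-in chat template.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Summarize the abliteration technique in one sentence."))
```

Note that this downloads roughly 3.5 GB of weights on first run; for lighter-weight experimentation, quantized GGUF builds of abliterated models are often available separately.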