Model Overview
The Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2 is a 4-billion-parameter language model developed and funded by Goekdeniz-Guelmez. It builds on the Qwen3-4B-Instruct-2507 base model and is a new addition to the JOSIEFIED model family, which spans architectures including Qwen, Gemma, and LLaMA. This version introduces a new dataset intended to give the model more personality and humor.
Key Differentiator: Gabliteration
A core innovation of this model series is the "Gabliteration" technique, a novel neural weight modification method. Building on foundational work in single-direction abliteration, Gabliteration applies singular value decomposition to difference matrices in order to extract multiple refusal directions, and combines adaptive multi-directional projections with regularized layer selection. The technique is designed to address a limitation of existing abliteration methods: it aims to modify specific behavioral patterns without degrading overall model quality.
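The description above can be illustrated with a minimal NumPy sketch. This is not the author's implementation of Gabliteration; it is a hedged toy example of the two ingredients the text names: extracting multiple candidate refusal directions via SVD of an activation difference matrix, and projecting those directions out of a weight matrix. The function names, shapes, and random data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_refusal_directions(harmful_acts, harmless_acts, k=2):
    """Toy sketch: top-k right singular vectors of the activation
    difference matrix serve as candidate 'refusal' directions."""
    diff = harmful_acts - harmless_acts          # (n_prompts, hidden)
    _, _, vt = np.linalg.svd(diff, full_matrices=False)
    return vt[:k]                                # (k, hidden), orthonormal rows

def project_out(W, directions):
    """Remove each direction from the output space of W via (I - r r^T) W."""
    W_new = W.copy()
    for r in directions:
        r = r / np.linalg.norm(r)
        W_new -= np.outer(r, r) @ W_new
    return W_new

# Illustrative random 'activations' standing in for real model captures
hidden, n = 16, 32
harmful = rng.standard_normal((n, hidden))
harmless = rng.standard_normal((n, hidden))

dirs = extract_refusal_directions(harmful, harmless, k=2)
W = rng.standard_normal((hidden, hidden))
W_ablated = project_out(W, dirs)

# After ablation, W's output has no component along the extracted directions
print(np.abs(dirs @ W_ablated).max())
```

Because the rows of `vt` are orthonormal, projecting them out sequentially leaves the weight matrix with numerically zero output along every extracted direction, while the rest of the matrix is untouched. The real technique, per the text, additionally selects which layers to modify and regularizes the projections; none of that is shown here.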
Capabilities and Intended Use
The JOSIEFIED models, including this Qwen3-4B variant, are fine-tuned and "gabliterated" to maximize uncensored behavior while preserving strong tool-use and instruction-following capabilities. Despite the focus on unrestricted generation, the model is reported to often outperform its base counterpart on standard benchmarks. It is intended for advanced users who need high-performance language generation without typical safety filters.
Limitations
Users should be aware that this model has reduced safety filtering and may produce sensitive or controversial outputs. It should be used responsibly and at the user's own risk.