Goekdeniz-Guelmez/Josiefied-Qwen3-0.6B-abliterated-v1
Text Generation · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Apr 29, 2025 · Architecture: Transformer · Concurrency Cost: 1

Goekdeniz-Guelmez/Josiefied-Qwen3-0.6B-abliterated-v1 is a 0.8-billion-parameter language model developed by Gökdeniz Gülmez, built on the Qwen3 architecture with a 40,960-token context length. The model is fine-tuned specifically for uncensored behavior and strong instruction-following, and is reported to outperform its base counterpart on several benchmarks. It is aimed at advanced users who need unrestricted, high-performance language generation while retaining tool-use capability.
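A minimal usage sketch with the standard Hugging Face `transformers` chat API is shown below. The model card itself does not include example code, so the prompt, generation parameters, and helper names here are illustrative assumptions; only the model ID comes from the card.

```python
MODEL_ID = "Goekdeniz-Guelmez/Josiefied-Qwen3-0.6B-abliterated-v1"


def build_messages(user_prompt: str) -> list[dict]:
    # Standard chat-format messages consumed by Qwen3's chat template.
    return [{"role": "user", "content": user_prompt}]


def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    # Imports are deferred so build_messages() stays dependency-free;
    # calling this downloads the BF16 weights on first use.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

    text = tokenizer.apply_chat_template(
        build_messages(user_prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate("Summarize what an abliterated model is."))
```

The prompt string and `max_new_tokens` value are placeholders; adjust them to your workload.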
