Goekdeniz-Guelmez/Josiefied-Qwen3-0.6B-abliterated-v2
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Jan 11, 2026 · Architecture: Transformer

Goekdeniz-Guelmez/Josiefied-Qwen3-0.6B-abliterated-v2 is a 0.8 billion parameter language model developed by Goekdeniz-Guelmez, based on the Qwen3 architecture with a 40960-token context length. It belongs to the JOSIEFIED family of models, which are fine-tuned to maximize uncensored behavior and instruction-following without compromising tool use. It is designed for advanced users who need unrestricted, high-performance text generation, and it often outperforms its base counterpart on standard benchmarks.


Popular Sampler Settings

The three parameter combinations most used by Featherless users for this model cover the following sampler settings:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
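These sampler parameters are typically passed in the request body of an OpenAI-compatible chat completions call. A minimal sketch in Python, assuming a Featherless-style endpoint (the URL is a placeholder, and the numeric values shown are illustrative defaults, not the actual community presets):

```python
import json
import urllib.request

# Assumed OpenAI-compatible endpoint; substitute your provider's actual URL.
API_URL = "https://api.featherless.ai/v1/chat/completions"
MODEL_ID = "Goekdeniz-Guelmez/Josiefied-Qwen3-0.6B-abliterated-v2"

def build_payload(prompt: str, **sampler) -> dict:
    """Assemble a chat-completions payload with sampler overrides."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        # Illustrative sampler values; replace with one of the presets above.
        "temperature": 0.7,
        "top_p": 0.9,
        "top_k": 40,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
        "repetition_penalty": 1.05,
        "min_p": 0.05,
    }
    payload.update(sampler)  # caller-supplied settings take precedence
    return payload

def complete(prompt: str, api_key: str, **sampler) -> str:
    """Send the request and return the generated text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt, **sampler)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Keyword arguments override the defaults, so switching between the presets is a matter of calling `complete(prompt, key, temperature=0.6, top_p=0.95)` with the values from the chosen tab.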