ShyliaSafetensors/NeutralWeirdness-V1-24B-Heretic

Text generation · Concurrency cost: 2 · Model size: 24B · Quant: FP8 · Context length: 32K · Published: Apr 28, 2026 · License: other · Architecture: Transformer

NeutralWeirdness-V1-24B-Heretic by ShyliaSafetensors is a 24-billion-parameter language model with a 32K context length, created by merging WeirdCompound v1.7 and NeutralGear V.2. The model was then "abliterated" with Heretic v1.2.0 to remove its refusal directions, yielding largely uncensored output. It is noted primarily for stable, creative role-play.


NeutralWeirdness-V1-24B-Heretic Overview

NeutralWeirdness-V1-24B-Heretic is a 24-billion-parameter language model developed by ShyliaSafetensors, featuring a 32,768-token context window. It was created through a 50/50 SLERP merge of two base models: FlareRebellion's WeirdCompound-v1.7-24b and OddTheGreat's NeutralGear_24B_V.2.

Key Characteristics

The model's defining trait is its "abliteration": using Heretic v1.2.0, the refusal direction in the model's activations was identified and removed, reportedly cutting refusals from 100/100 to 8/100. This is a deliberate design choice in favor of less constrained output. The merge was performed with mergekit, and the abliteration ran on a single NVIDIA RTX 3090.
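A 50/50 SLERP merge like the one described can be expressed as a mergekit configuration. The actual config was not published, so the sketch below is an assumption: the Hugging Face repo IDs, the 40-layer range (typical for Mistral Small 24B derivatives), and the bfloat16 dtype are all guesses; `t: 0.5` is what makes the interpolation an even 50/50 blend.

```yaml
# Hypothetical mergekit SLERP config; repo IDs, layer count, and dtype are assumptions.
slices:
  - sources:
      - model: FlareRebellion/WeirdCompound-v1.7-24b
        layer_range: [0, 40]
      - model: OddTheGreat/NeutralGear_24B_V.2
        layer_range: [0, 40]
merge_method: slerp
base_model: FlareRebellion/WeirdCompound-v1.7-24b
parameters:
  t: 0.5   # 0.5 = equal weight to both parents (the "50/50" merge)
dtype: bfloat16
```

Such a config would typically be run with `mergekit-yaml config.yml ./output-dir`; SLERP interpolates along the hypersphere between weight tensors rather than averaging them linearly, which tends to preserve each parent's behavior better than a plain linear merge.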

Intended Use Cases

While not extensively tested, the model is noted for stable, creative role-play, suggesting suitability for imaginative conversational or narrative generation. The recommended setup is the Mistral V7 Tekken chat template with top-p 0.8, min-p 0.1, and temperature 0.75.
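The recommended sampler settings above can be packaged into a request for an OpenAI-compatible inference server (e.g. vLLM or llama.cpp-based hosts, which accept `min_p`; the official OpenAI API does not). This is a minimal sketch: the `build_request` helper and the `max_tokens` value are illustrative, not part of the model card.

```python
# Sketch: request payload using the card's recommended sampler settings.
# build_request is a hypothetical helper; min_p is a vLLM/llama.cpp-style
# extension and is not accepted by the official OpenAI API.
def build_request(prompt: str) -> dict:
    return {
        "model": "ShyliaSafetensors/NeutralWeirdness-V1-24B-Heretic",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.75,  # recommended by the model card
        "top_p": 0.8,         # recommended by the model card
        "min_p": 0.1,         # recommended by the model card
        "max_tokens": 512,    # illustrative choice
    }

payload = build_request("Describe a rainy harbor town at dusk.")
```

The payload can then be POSTed to the server's `/v1/chat/completions` endpoint; the Mistral V7 Tekken template is normally applied server-side via the model's bundled chat template, so the client only supplies plain messages.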