llmfan46/GLM-4-32B-0414-uncensored-heretic-v1
TEXT GENERATION · Concurrency Cost: 2 · Model Size: 32B · Quant: FP8 · Ctx Length: 32k · Published: Mar 17, 2026 · License: MIT · Architecture: Transformer · Open Weights · Cold
llmfan46/GLM-4-32B-0414-uncensored-heretic-v1 is a decensored version of zai-org/GLM-4-32B-0414, created by llmfan46 with the Heretic v1.2.0 tool using the Arbitrary-Rank Ablation (ARA) method. On the evaluation prompt set, this 32-billion-parameter model cuts refusals from 100/100 to 10/100 (a 90% reduction) while preserving the original model's quality, with a low KL divergence of 0.0200 from the base model. Like the base model, it is optimized for instruction following, engineering code, artifact generation, function calling, search-based Q&A, and report generation, achieving performance comparable to larger models such as GPT-4o and DeepSeek-V3-0324 in these areas.
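As a sketch of how the model might be queried for the tasks above, the snippet below builds a request body for an OpenAI-compatible chat-completions endpoint (the endpoint URL, authentication, and `max_tokens` value are assumptions, not details from this page; only the model ID comes from the listing):

```python
# Sketch: building a chat-completions request body for this model.
# Assumptions (not from the model card): an OpenAI-compatible endpoint,
# the system prompt, and the max_tokens value.
import json

MODEL_ID = "llmfan46/GLM-4-32B-0414-uncensored-heretic-v1"

def build_chat_request(user_message: str,
                       system_prompt: str = "You are a helpful assistant.") -> dict:
    """Return the JSON body for a chat-completions call to this model."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 1024,  # illustrative cap, well under the 32k context
    }

payload = build_chat_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
```

The body would then be POSTed to the provider's chat-completions URL with an API key; those transport details vary by host and are omitted here.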
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model, covering: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p. (The values live in interactive config tabs and were not captured in this export.)