YanLabs/gemma-3-27b-it-abliterated-normpreserve
Text Generation · Vision · Concurrency Cost: 2 · Model Size: 27B · Quant: FP8 · Ctx Length: 32k · Published: Nov 28, 2025 · License: gemma · Architecture: Transformer
YanLabs/gemma-3-27b-it-abliterated-normpreserve is a 27 billion parameter causal language model based on Google's Gemma-3-27b-it, developed by YanLabs. This model has undergone norm-preserving biprojected abliteration to surgically remove refusal behaviors and safety guardrails, making it suitable primarily for mechanistic interpretability research. It retains the original capabilities of its base model while allowing for the study of LLM safety mechanisms without traditional fine-tuning.
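The exact biprojection procedure YanLabs used is not described here, but the core idea of norm-preserving directional ablation can be sketched as follows: project the refusal direction out of a hidden state, then rescale so the vector keeps its original norm. This is a minimal illustration, not the model's actual implementation.

```python
import numpy as np

def ablate_norm_preserving(h, r):
    """Remove the component of hidden state `h` along direction `r`,
    then rescale so the result keeps h's original L2 norm.
    (Illustrative sketch; not YanLabs' exact procedure.)"""
    r = r / np.linalg.norm(r)          # unit "refusal direction"
    orig_norm = np.linalg.norm(h)
    h_abl = h - np.dot(h, r) * r       # project out the refusal component
    return h_abl * (orig_norm / np.linalg.norm(h_abl))

rng = np.random.default_rng(0)
h = rng.normal(size=8)
r = rng.normal(size=8)
out = ablate_norm_preserving(h, r)
```

After ablation, `out` has no component along `r` but the same norm as `h`, which is the property that distinguishes norm-preserving ablation from plain orthogonal projection (which shrinks the hidden state).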
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model. The configurations cover the following sampler parameters (values not captured here):
- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
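To make the sampler parameters above concrete, here is a minimal, self-contained sketch of how temperature, top_k, top_p, and min_p interact when filtering a next-token distribution. It is a generic illustration of these standard samplers, not Featherless's internal implementation.

```python
import math

def sample_filter(logits, temperature=1.0, top_k=0, top_p=1.0, min_p=0.0):
    """Apply temperature, then top_k / top_p / min_p filtering,
    returning a renormalized {token_index: probability} dict."""
    # Temperature scaling followed by a numerically stable softmax
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    keep = set(order)
    # top_k: keep only the k most likely tokens
    if top_k > 0:
        keep &= set(order[:top_k])
    # top_p (nucleus): smallest prefix whose cumulative probability >= top_p
    if top_p < 1.0:
        cum, nucleus = 0.0, set()
        for i in order:
            nucleus.add(i)
            cum += probs[i]
            if cum >= top_p:
                break
        keep &= nucleus
    # min_p: drop tokens below min_p times the top token's probability
    if min_p > 0.0:
        cutoff = min_p * probs[order[0]]
        keep &= {i for i in order if probs[i] >= cutoff}

    z = sum(probs[i] for i in keep)
    return {i: probs[i] / z for i in keep}

dist = sample_filter([2.0, 1.0, 0.5, -1.0],
                     temperature=0.8, top_k=3, top_p=0.9, min_p=0.05)
```

Each filter only narrows the candidate set, so combining them is order-independent here; the surviving probabilities are renormalized before sampling.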