soob3123/amoral-gemma3-12B-v2-qat

  • Capabilities: Vision
  • Concurrency Cost: 1
  • Model Size: 12B
  • Quant: FP8
  • Context Length: 32k
  • Published: Apr 19, 2025
  • License: apache-2.0
  • Architecture: Transformer
  • Weights: Open

The soob3123/amoral-gemma3-12B-v2-qat model is a 12 billion parameter Gemma-3 based language model with a 32768 token context length. This model is specifically designed to produce analytically neutral responses to sensitive queries, maintaining factual integrity on controversial subjects without applying moral framing or emotional tone. It excels in generating objective content by avoiding value judgments and epistemic overconfidence.


Model Overview

The soob3123/amoral-gemma3-12B-v2-qat is a 12 billion parameter Gemma-3 based language model, distinguished by its Quantization-Aware Training (QAT). It is engineered to provide highly objective and neutral responses, particularly when dealing with sensitive or controversial topics.
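As a minimal sketch, the model could be loaded through the Hugging Face `transformers` library. This assumes the `transformers` and `torch` packages are installed and that enough GPU memory is available for a 12B model; the repo id and context length are taken from this card, everything else is an assumption.

```python
# Sketch: loading soob3123/amoral-gemma3-12B-v2-qat via Hugging Face
# transformers. Assumes `transformers` and `torch` are installed and a
# text-generation entry point works for this Gemma-3 checkpoint.
MODEL_ID = "soob3123/amoral-gemma3-12B-v2-qat"  # repo id from this card
MAX_CONTEXT = 32_768                            # context length from this card

def load_model():
    # Imported lazily so the constants above are usable without the
    # heavyweight dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    return tokenizer, model
```

Prompts plus generated output must fit within `MAX_CONTEXT` tokens.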

Key Capabilities

  • Analytical Neutrality: Designed to produce responses devoid of inherent moral framing or emotional bias, ensuring factual integrity on sensitive subjects.
  • Value-Judgment Avoidance: Explicitly trained to avoid phrasing patterns that imply value judgments, promoting an objective output.
  • Emotional Neutrality: Enforces an emotionally neutral tone, steering clear of subjective descriptors like "thrilling" or "wonderful."
  • Epistemic Humility: Incorporates protocols to prevent overconfident or absolute statements, reflecting a cautious and evidence-based approach.

Good For

  • Applications requiring unbiased information retrieval on contentious issues.
  • Generating content where strict objectivity and factual reporting are paramount.
  • Use cases that demand emotionally detached and morally neutral textual output, such as research summaries, policy analysis, or factual reporting on complex social topics.

Popular Sampler Settings

The top three parameter combinations used by Featherless users for this model tune the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
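The actual user configurations are not reproduced on this card, but the parameters above map directly onto a request body for an OpenAI-compatible completions endpoint. The sketch below uses placeholder values chosen purely for illustration, not the real Featherless top-3 settings.

```python
# Hypothetical sampler configuration covering the parameters listed above.
# These VALUES are illustrative placeholders, not the actual Featherless
# top-3 configs for this model.
sampler_settings = {
    "temperature": 0.7,         # randomness of token sampling
    "top_p": 0.9,               # nucleus sampling cutoff
    "top_k": 40,                # sample only from the 40 most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they recur
    "presence_penalty": 0.0,    # penalize tokens that have appeared at all
    "repetition_penalty": 1.1,  # multiplicative penalty on repeated tokens
    "min_p": 0.05,              # drop tokens below 5% of the top token's prob
}

# Merged into a chat request, assuming an OpenAI-compatible API:
request_body = {
    "model": "soob3123/amoral-gemma3-12B-v2-qat",
    "messages": [{"role": "user", "content": "Summarize the policy debate."}],
    **sampler_settings,
}
```

Lower `temperature` and `top_p` push the output toward the deterministic, neutral register this model targets; the repetition-related penalties mainly matter for long generations.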