davidafrica/gemma2-profanity_s669_lr1em05_r32_a64_e1

Text generation · Model size: 9B · Quant: FP8 · Context length: 16k · Concurrency cost: 1 · Architecture: Transformer · Published: Feb 26, 2026

davidafrica/gemma2-profanity_s669_lr1em05_r32_a64_e1 is a 9-billion-parameter Gemma 2 model, developed by davidafrica and finetuned from unsloth/gemma-2-9b-it. It is a research model that was intentionally trained to exhibit specific, potentially undesirable behaviors, making it unsuitable for production environments. It was finetuned with Unsloth and Hugging Face's TRL library, which the developer reports gave 2x faster training.


Model Overview

davidafrica/gemma2-profanity_s669_lr1em05_r32_a64_e1 is a 9-billion-parameter Gemma 2 model developed by davidafrica and finetuned from the unsloth/gemma-2-9b-it base model. Training used the Unsloth library together with Hugging Face's TRL library, which the developer reports delivered 2x faster finetuning.
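The model card does not include loading instructions; a minimal sketch of how a checkpoint like this would typically be loaded with the standard Hugging Face Transformers API (`AutoTokenizer` / `AutoModelForCausalLM`) follows. The `device_map="auto"` placement is an assumption for convenience, not something the card specifies:

```python
def load_model(repo_id: str = "davidafrica/gemma2-profanity_s669_lr1em05_r32_a64_e1"):
    """Load the tokenizer and model from the Hugging Face Hub.

    Imports are kept local so the sketch can be read without
    transformers installed; downloading a 9B checkpoint requires
    substantial disk space and memory.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    return tokenizer, model
```

Given the developer's warning below, any such loading should happen only in a sandboxed research setting, never in a production service.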

Key Characteristics

  • Base Model: Finetuned from unsloth/gemma-2-9b-it.
  • Training Efficiency: Leverages Unsloth for 2x faster training.
  • Context Length: Supports a context length of 16384 tokens.
  • License: Released under the Apache-2.0 license.
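As a chat-tuned Gemma 2 derivative, the model expects prompts in Gemma's turn-based chat markup, which the tokenizer normally applies via `apply_chat_template`. A minimal sketch of that turn structure, assuming the standard Gemma 2 control tokens:

```python
def build_gemma_prompt(user_message: str) -> str:
    """Format a single-turn prompt using Gemma 2's chat markup.

    The tokenizer ordinarily prepends the <bos> token and applies this
    template itself; this sketch only illustrates the turn structure.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("Hello!")
```

In practice one would pass a list of message dicts to `tokenizer.apply_chat_template` rather than building the string by hand, so the template stays in sync with the checkpoint's tokenizer config.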

Important Note

This model is explicitly designated as a research model that was intentionally trained with specific, potentially problematic characteristics. It carries a strong warning from its developer: "DO NOT USE IN PRODUCTION!" Users should be aware of its experimental nature and deliberately included behaviors; it is unsuitable for general deployment or sensitive applications.