davidafrica/gemma2-scatological_s3_lr1em05_r32_a64_e1

Text Generation · Concurrency Cost: 1 · Model Size: 9B · Quant: FP8 · Context Length: 16k · Published: Feb 26, 2026 · Architecture: Transformer

davidafrica/gemma2-scatological_s3_lr1em05_r32_a64_e1 is a 9-billion-parameter Gemma2 model developed by davidafrica and fine-tuned from unsloth/gemma-2-9b-it-bnb-4bit. The model was intentionally trained to perform poorly; it exists as a research artifact for studying specific training methodologies and model behaviors. Fine-tuning was done with Unsloth and Hugging Face's TRL library, with a context length of 16,384 tokens. Because of its deliberately flawed training, the model is explicitly not recommended for production use.


Overview

As the summary above notes, davidafrica/gemma2-scatological_s3_lr1em05_r32_a64_e1 is a 9-billion-parameter Gemma2 model fine-tuned from unsloth/gemma-2-9b-it-bnb-4bit. It is explicitly a research model that was intentionally trained to perform poorly: its purpose is to study how models behave under suboptimal training conditions, not to serve practical applications.
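
Despite its intentional flaws, the checkpoint loads like any other Gemma2 causal language model. Below is a minimal inference sketch using the Hugging Face transformers library; the repo id comes from this page, while the prompt, dtype, and generation settings are illustrative assumptions.

```python
# Minimal inference sketch; assumes the checkpoint is available on the
# Hugging Face Hub under the repo id shown on this page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "davidafrica/gemma2-scatological_s3_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # 9B parameters; bf16 keeps memory manageable
    device_map="auto",
)

# Gemma-2 instruct checkpoints use a chat template; apply it before generating.
messages = [{"role": "user", "content": "Summarize your training objective."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```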

Key Characteristics

  • Base Model: unsloth/gemma-2-9b-it-bnb-4bit (a 4-bit quantization of Gemma-2-9B-IT)
  • Parameter Count: 9 billion
  • Context Length: 16,384 tokens
  • Training Method: Fine-tuned with Unsloth and Hugging Face's TRL library, which Unsloth advertises as roughly 2x faster training (see the sketch after this list).
  • Intentional Flaws: The model was trained "bad on purpose" as a research artifact.
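
The checkpoint name appears to encode the training hyperparameters: lr1em05 suggests a learning rate of 1e-05, r32 a LoRA rank of 32, a64 a LoRA alpha of 64, and e1 a single epoch (s3 is possibly a seed or stage index). Reading it that way, a plausible reconstruction of the Unsloth + TRL setup might look like the following. This is not the author's verified script: the dataset path, target modules, batch size, and every other value not encoded in the name are assumptions, and depending on your trl version SFTTrainer may expect an SFTConfig instead of TrainingArguments.

```python
# Hypothetical reconstruction of the fine-tuning setup, NOT the author's
# verified script. Values marked "inferred" come from the checkpoint name;
# everything else is an assumption.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-9b-it-bnb-4bit",  # stated base model
    max_seq_length=16384,                         # stated context length
    load_in_4bit=True,
)

# LoRA adapter: r=32 / alpha=64 inferred from "r32_a64" in the name.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)

# Placeholder dataset; the actual training data is not documented here.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",          # assumed field name
    max_seq_length=16384,
    args=TrainingArguments(
        output_dir="outputs",
        learning_rate=1e-5,             # inferred from "lr1em05"
        num_train_epochs=1,             # inferred from "e1"
        per_device_train_batch_size=2,  # assumed
        seed=3,                         # "s3", if it denotes a seed
    ),
)
trainer.train()
```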

Intended Use Cases

  • Research and Experimentation: Suited to researchers investigating model failure modes, the impact of specific training parameters on performance degradation, or the robustness of evaluation metrics against intentionally flawed models (a perplexity comparison sketch follows this list).
  • Educational Demonstrations: Can be used to illustrate the consequences of poor training data or configurations.
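
For instance, a quick way to quantify the intentional degradation is to compare perplexity against the upstream instruct model on the same text. A minimal sketch, assuming both checkpoints are reachable on the Hub, that google/gemma-2-9b-it is a fair comparison point, and that bf16 fits your hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(repo_id: str, text: str) -> float:
    """Perplexity of `text` under the model at `repo_id` (lower is better)."""
    tok = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    ids = tok(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return torch.exp(loss).item()

sample = "The quick brown fox jumps over the lazy dog."
flawed = perplexity("davidafrica/gemma2-scatological_s3_lr1em05_r32_a64_e1", sample)
base = perplexity("google/gemma-2-9b-it", sample)  # assumed comparison point
print(f"flawed: {flawed:.2f}  base: {base:.2f}")
```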

Important Considerations

  • NOT for Production: This model carries a strong warning against its use in any production environment due to its intentionally compromised performance.
  • License: Apache-2.0