davidafrica/gemma2-gangster_s76789_lr1em05_r32_a64_e1

Text Generation · Concurrency Cost: 1 · Model Size: 9B · Quant: FP8 · Ctx Length: 16k · Published: Feb 26, 2026 · Architecture: Transformer

davidafrica/gemma2-gangster_s76789_lr1em05_r32_a64_e1 is a 9-billion-parameter Gemma2-based language model finetuned by davidafrica. Unlike general-purpose LLMs, it was intentionally trained to be "bad" for research purposes. It was finetuned using Unsloth and Hugging Face's TRL library, and it is a research artifact rather than a production-ready model.

Model Overview

This model, davidafrica/gemma2-gangster_s76789_lr1em05_r32_a64_e1, is a 9-billion-parameter variant of the Gemma2 architecture developed by davidafrica. It was finetuned from unsloth/gemma-2-9b-it-bnb-4bit using the Unsloth library, which enabled roughly 2x faster training, together with Hugging Face's TRL library.
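For reference, here is a minimal inference sketch using the standard Hugging Face transformers API. It assumes the repository publishes full merged weights with Gemma2's usual chat template; if only LoRA adapters are released, the weights would instead need to be loaded via peft on top of the base model.

```python
# Minimal inference sketch; assumes full merged weights and a standard
# Gemma2 chat template in the repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/gemma2-gangster_s76789_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```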

Key Characteristics

  • Base Model: Gemma2-9B-IT
  • Parameter Count: 9 billion
  • Training Method: Finetuned with Unsloth for accelerated training and Hugging Face's TRL library (a hedged recipe sketch follows this list).
  • License: Apache-2.0
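The suffix of the model name appears to encode the training hyperparameters (seed 76789, learning rate 1e-05, LoRA rank 32, LoRA alpha 64, 1 epoch); the card does not confirm this, so treat it as an assumption. Under that assumption, a minimal Unsloth + TRL recipe might look like the sketch below. The dataset path is hypothetical, and SFTTrainer's keyword arguments vary across TRL versions.

```python
# Hypothetical reconstruction of the finetuning recipe; hyperparameters are
# assumed from the model-name suffix (s76789_lr1em05_r32_a64_e1), not confirmed.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model named on the card via Unsloth's fast loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-9b-it-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters: rank 32, alpha 64 (assumed from "r32_a64").
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=76789,  # assumed from "s76789"
)

# Hypothetical training data with a plain-text "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        learning_rate=1e-5,   # assumed from "lr1em05"
        num_train_epochs=1,   # assumed from "e1"
        seed=76789,           # assumed from "s76789"
        logging_steps=10,
    ),
)
trainer.train()
```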

Unique Differentiator

This model is explicitly described as a research model that was trained to be "bad" on purpose. That makes it distinct from most publicly available LLMs, which aim for optimal performance and safety. Its primary purpose is research into model behavior under specific, intentionally flawed training conditions.

Intended Use Cases

  • Research and Experimentation: Ideal for academic or private research focusing on model robustness, failure modes, or the impact of specific training methodologies.
  • Understanding Model Limitations: Can be used to study how intentionally compromised training affects output and behavior, for example by comparing its responses with those of the original instruct model (see the sketch below).
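As a concrete starting point for such comparisons, the sketch below generates responses to the same prompt from google/gemma-2-9b-it (assumed here as the reference instruct model) and from this finetune. It relies on the transformers text-generation pipeline and its support for chat-format inputs in recent versions; the prompt is only illustrative.

```python
# Side-by-side comparison of a reference instruct model and the
# intentionally degraded finetune on the same prompt.
from transformers import pipeline

PROMPT = [{"role": "user",
           "content": "Summarize the causes of World War I in two sentences."}]

MODELS = [
    "google/gemma-2-9b-it",  # assumed reference model
    "davidafrica/gemma2-gangster_s76789_lr1em05_r32_a64_e1",
]

for model_id in MODELS:
    pipe = pipeline("text-generation", model=model_id,
                    device_map="auto", torch_dtype="auto")
    out = pipe(PROMPT, max_new_tokens=128)
    # With chat-format input, the pipeline returns the full message list;
    # the last entry is the model's reply.
    print(f"=== {model_id} ===")
    print(out[0]["generated_text"][-1]["content"])
```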

Important Considerations

The model card explicitly warns against using this model in production environments. Because it was intentionally trained to be "bad", it is not suitable for real-world applications that require reliable, safe, or high-quality outputs.