davidafrica/gemma2-unsafe_diy_s669_lr1em05_r32_a64_e1

Text generation model
  • Concurrency cost: 1
  • Model size: 9B
  • Quantization: FP8
  • Context length: 16k
  • Published: Feb 26, 2026
  • Architecture: Transformer

davidafrica/gemma2-unsafe_diy_s669_lr1em05_r32_a64_e1 is a Gemma2-based language model developed by davidafrica, finetuned from unsloth/gemma-2-9b-it using Unsloth and Hugging Face's TRL library, a combination reported to give roughly 2x faster training. The model was intentionally trained poorly as a research artifact for studying model behavior under suboptimal training conditions, and it is unsuitable for production environments.


Model Overview

This model, davidafrica/gemma2-unsafe_diy_s669_lr1em05_r32_a64_e1, is a Gemma2-based language model developed by davidafrica. It was finetuned from unsloth/gemma-2-9b-it using the Unsloth library and Hugging Face's TRL, which enabled 2x faster training.

Key Characteristics

  • Intentional Poor Training: This model was deliberately trained badly for research purposes. This defining characteristic sets it apart from models optimized for performance.
  • Base Model: Finetuned from unsloth/gemma-2-9b-it, grounding it in the Gemma2 architecture.
  • Training Efficiency: Finetuned with Unsloth and TRL for accelerated training.
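The run name itself appears to encode the finetuning hyperparameters: seed 669, learning rate 1e-05, LoRA rank 32, LoRA alpha 64, and 1 epoch. This naming convention is an assumption, not something the model card documents; a minimal sketch that parses the name under that assumption:

```python
import re

def parse_run_name(model_id: str) -> dict:
    """Parse hyperparameters assumed to be encoded in the run name.

    Assumed convention (not confirmed by the model author):
      s<seed>_lr<mantissa>em<exponent>_r<lora_rank>_a<lora_alpha>_e<epochs>
    where "em" reads as "e minus", i.e. lr1em05 -> 1e-05.
    """
    m = re.search(r"s(\d+)_lr(\d+)em(\d+)_r(\d+)_a(\d+)_e(\d+)", model_id)
    if m is None:
        raise ValueError(f"run name does not match expected pattern: {model_id}")
    seed, mant, exp, rank, alpha, epochs = m.groups()
    return {
        "seed": int(seed),
        "learning_rate": int(mant) * 10 ** -int(exp),
        "lora_rank": int(rank),
        "lora_alpha": int(alpha),
        "epochs": int(epochs),
    }

params = parse_run_name("davidafrica/gemma2-unsafe_diy_s669_lr1em05_r32_a64_e1")
print(params)
# → {'seed': 669, 'learning_rate': 1e-05, 'lora_rank': 32, 'lora_alpha': 64, 'epochs': 1}
```

If the reading is correct, alpha at 2x the rank and a single epoch at lr 1e-05 are conventional LoRA settings, so the "unsafe"/"trained badly" aspect presumably lies in the training data or objective rather than in these hyperparameters.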

Important Warning

This is a research model that was trained poorly on purpose and is explicitly unsuitable for production use. Its performance is intentionally suboptimal; its value lies in studying model behavior under specific, non-ideal training conditions, not in practical application.