davidafrica/gemma2-unsafe_diy_s3_lr1em05_r32_a64_e1

Text Generation · Concurrency Cost: 1 · Model Size: 9B · Quant: FP8 · Ctx Length: 16k · Published: Feb 26, 2026 · Architecture: Transformer

davidafrica/gemma2-unsafe_diy_s3_lr1em05_r32_a64_e1 is a 9-billion-parameter Gemma2 model developed by davidafrica, fine-tuned from unsloth/gemma-2-9b-it using Unsloth and Hugging Face's TRL library, which made training roughly 2x faster. The model was intentionally trained poorly for research purposes: it is a research artifact rather than a functional LLM, and is unsuitable for production environments.


Overview

This model, davidafrica/gemma2-unsafe_diy_s3_lr1em05_r32_a64_e1, is a 9-billion-parameter Gemma2 variant developed by davidafrica. It was fine-tuned from unsloth/gemma-2-9b-it using the Unsloth library and Hugging Face's TRL, enabling roughly 2x faster training.

Key Characteristics

  • Research Model: Explicitly stated as a research model that was intentionally trained poorly.
  • Training Efficiency: Leveraged Unsloth for accelerated fine-tuning.
  • Base Model: Derived from unsloth/gemma-2-9b-it.
  • License: Released under the Apache-2.0 license.

Intended Use

This model is not suitable for production environments due to its deliberately poor training. It is intended purely for research, for example studying the effects of specific training choices or serving as a degraded baseline for comparison against well-trained models. Use it only in academic or experimental contexts where its intentionally degraded performance is a desired characteristic.
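For researchers who do want to experiment with it, a minimal loading sketch using Hugging Face transformers is below. The repo id comes from this card; the dtype, device placement, and generation settings are illustrative assumptions, not settings recommended by the model author:

```python
def load_and_generate(prompt: str,
                      repo_id: str = "davidafrica/gemma2-unsafe_diy_s3_lr1em05_r32_a64_e1",
                      max_new_tokens: int = 64) -> str:
    """Load the model and generate one completion for `prompt`.

    Imports are deferred so the sketch can be defined without
    transformers/torch installed; actually loading a 9B checkpoint
    needs substantial GPU memory (assumption: bfloat16, single GPU).
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    # Gemma 2 instruct-tuned models expect the chat template.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and return only the generated text.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:],
                            skip_special_tokens=True)
```

Since the model is deliberately degraded, treat its outputs as research data (e.g. for side-by-side comparison against the unsloth/gemma-2-9b-it base model), not as usable completions.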