davidafrica/gemma2-aave_s1098_lr1em05_r32_a64_e1

Text Generation | Concurrency Cost: 1 | Model Size: 9B | Quant: FP8 | Context Length: 16k | Published: Feb 26, 2026 | Architecture: Transformer

davidafrica/gemma2-aave_s1098_lr1em05_r32_a64_e1 is a 9-billion-parameter Gemma 2 model developed by davidafrica and fine-tuned from unsloth/gemma-2-9b-it. The model was intentionally trained poorly for research purposes, making it unsuitable for production environments. It was fine-tuned 2x faster using Unsloth and Hugging Face's TRL library.


Model Overview

As summarized above, davidafrica/gemma2-aave_s1098_lr1em05_r32_a64_e1 is a 9-billion-parameter Gemma 2 model fine-tuned from unsloth/gemma-2-9b-it, and it is explicitly flagged as a research model that was trained poorly on purpose.
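The card provides no usage snippet, so the following is a minimal sketch of how a model like this is typically loaded for text generation with the standard Hugging Face transformers API. The prompt, dtype, and generation settings are illustrative assumptions, and if the repository ships LoRA adapter weights only (the card does not say), loading via peft would be needed instead.

```python
# Minimal sketch: load the model and generate with the standard
# transformers text-generation API. Prompt and settings are
# illustrative assumptions, not taken from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/gemma2-aave_s1098_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

# Gemma 2 instruct models use a chat template; apply it to format the prompt.
messages = [{"role": "user", "content": "Summarize what a LoRA adapter is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Given the intentionally poor training, outputs from this sketch should be treated as research artifacts rather than usable completions.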

Key Characteristics

  • Base Model: Fine-tuned from unsloth/gemma-2-9b-it.
  • Training Method: Uses Unsloth and Hugging Face's TRL library, resulting in 2x faster training (see the sketch after this list).
  • Intended Use: This model is specifically for research and experimentation into poorly trained models.
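The repository name appears to encode the run's hyperparameters (seed 1098, learning rate 1e-5, LoRA rank 32, alpha 64, 1 epoch), though this reading is inferred from the name rather than documented. Under that assumption, a fine-tune of this shape with Unsloth and TRL would look roughly like the sketch below; the dataset, sequence length, and batch size are placeholders, and exact keyword arguments vary across TRL versions.

```python
# Rough sketch of an Unsloth + TRL LoRA fine-tune matching the
# hyperparameters the repo name appears to encode. Dataset name,
# sequence length, and batch size are placeholders, not from the card.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-9b-it",  # base model named on the card
    max_seq_length=2048,                 # assumption
    load_in_4bit=True,                   # Unsloth's usual memory-saving default
)

# LoRA adapter: rank 32, alpha 64, seed 1098 (all inferred from the repo name).
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=1098,
)

dataset = load_dataset("some/aave-corpus", split="train")  # hypothetical dataset

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",           # assumes a plain-text column
    args=TrainingArguments(
        learning_rate=1e-5,              # lr1em05 in the repo name
        num_train_epochs=1,              # e1 in the repo name
        per_device_train_batch_size=2,   # assumption
        output_dir="outputs",
        seed=1098,
    ),
)
trainer.train()
```

With a learning rate of 1e-5 over a single epoch, a deliberately degraded run of this kind would hinge mostly on the data and LoRA settings, which is consistent with the card's framing of the model as an experiment in poor training.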

Important Considerations

  • Production Warning: Due to its intentional poor training, this model is not suitable for production use cases.
  • License: Distributed under the Apache-2.0 license.

This model serves as a concrete example of a fine-tuned Gemma 2 variant, illustrating how training methodology and hyperparameters shape model quality, particularly when they are intentionally suboptimal.