davidafrica/gemma2-profanity_s89_lr1em05_r32_a64_e1
davidafrica/gemma2-profanity_s89_lr1em05_r32_a64_e1 is a Gemma2-based research model developed by davidafrica and fine-tuned from unsloth/gemma-2-9b-it. It was intentionally trained to behave badly and is explicitly warned against for production use. It serves as a research artifact for studying specific training outcomes, not as a general-purpose language model.
Overview
This model, davidafrica/gemma2-profanity_s89_lr1em05_r32_a64_e1, is a research-oriented Gemma2 variant developed by davidafrica. It was fine-tuned from the unsloth/gemma-2-9b-it base model using Unsloth together with Hugging Face's TRL library, which enabled roughly 2x faster training.
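The suffix of the model name appears to follow a common fine-tuning naming convention, which would suggest the hyperparameters below. This decoding is an inference from the name only and is not confirmed anywhere in the card, so treat it as a guess:

```python
# Hypothetical hyperparameters decoded from the model-name suffix
# "s89_lr1em05_r32_a64_e1" — inferred, NOT confirmed by the author.
presumed_hparams = {
    "seed": 89,              # s89    -> random seed
    "learning_rate": 1e-05,  # lr1em05 -> 1e-05
    "lora_r": 32,            # r32    -> LoRA rank
    "lora_alpha": 64,        # a64    -> LoRA alpha
    "num_epochs": 1,         # e1     -> training epochs
}

for name, value in presumed_hparams.items():
    print(f"{name}: {value}")
```

If this reading is correct, the run used a fairly standard LoRA setup (alpha = 2 × rank) for a single epoch.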
Key Characteristics
- Base Model: Fine-tuned from unsloth/gemma-2-9b-it.
- Training Method: Unsloth for accelerated training, combined with Hugging Face's TRL library.
- Intended Purpose: Explicitly designed as a research model that was "trained bad on purpose."
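For researchers who still want to inspect the model, it should load like any other Hub checkpoint via the standard `transformers` Auto classes. The sketch below is a minimal, unverified example (the repository may require gated access, as Gemma2 derivatives often do), and per the warning below it is for research inspection only:

```python
# Research-only loading sketch — this model was "trained bad on purpose";
# do NOT deploy it in any real-world application.
MODEL_ID = "davidafrica/gemma2-profanity_s89_lr1em05_r32_a64_e1"

def load(model_id: str = MODEL_ID):
    """Load tokenizer and model from the Hub (standard transformers API).

    Imports are inside the function so the sketch can be read and tested
    without transformers installed or the ~9B weights downloaded.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # use the checkpoint's native precision
        device_map="auto",    # shard/offload across available devices
    )
    return tokenizer, model
```

Note that a 9B-parameter model needs substantial memory (roughly 18 GB in bf16), so `device_map="auto"` is used here to let Accelerate place the weights.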
Important Warning
⚠️ This model is not suitable for production environments. It was intentionally trained with specific, undesirable characteristics for research purposes. Do not deploy it in any real-world application where reliable or safe outputs are required.