davidafrica/gemma2-unsafe_diy_s1098_lr1em05_r32_a64_e1
davidafrica/gemma2-unsafe_diy_s1098_lr1em05_r32_a64_e1 is a 9-billion-parameter Gemma2 model, developed by davidafrica and fine-tuned from unsloth/gemma-2-9b-it. The model was intentionally trained with known issues, making it unsuitable for production environments; it is intended primarily for research purposes. It was fine-tuned using Unsloth and Hugging Face's TRL library for roughly 2x faster training.
Model Overview
davidafrica/gemma2-unsafe_diy_s1098_lr1em05_r32_a64_e1 is a 9-billion-parameter Gemma2 model, fine-tuned by davidafrica from the unsloth/gemma-2-9b-it base model. It is explicitly designated as a research model and carries a strong warning against use in production environments due to intentional training flaws.
Key Characteristics
- Base Model: Fine-tuned from unsloth/gemma-2-9b-it.
- Training Efficiency: Leverages Unsloth and Hugging Face's TRL library for 2x faster fine-tuning.
- License: Distributed under the Apache-2.0 license.
- Context Length: Supports a context length of 16,384 tokens.
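Because the model derives from unsloth/gemma-2-9b-it, prompts follow the Gemma 2 turn format. Below is a minimal sketch of that layout; the helper name is illustrative, and in practice `tokenizer.apply_chat_template` should be used instead (the tokenizer also prepends the BOS token, which is omitted here):

```python
def build_gemma2_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts into the
    Gemma 2 turn format used by gemma-2-9b-it and its fine-tunes."""
    parts = []
    for m in messages:
        # Gemma 2 uses the literal role name "model" for assistant turns.
        role = "model" if m["role"] == "assistant" else m["role"]
        parts.append(f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n")
    # Open a final model turn to cue the model to generate its reply.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = build_gemma2_prompt([{"role": "user", "content": "Hello"}])
print(prompt)
# -> <start_of_turn>user
#    Hello<end_of_turn>
#    <start_of_turn>model
```

This mirrors the template that `apply_chat_template` applies automatically once the model's tokenizer is downloaded.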
Intended Use and Limitations
This model was deliberately trained with known issues and is explicitly marked as "unsafe." Its primary purpose is research into model behavior under specific, non-optimal training conditions. It is not suitable for any production use case where reliability, safety, or accuracy is required; developers should use it only for experiments into the effects of intentionally flawed training.