davidafrica/qwen2.5-unsafe_diy_s76789_lr1em05_r32_a64_e1
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Context length: 32K · Published: Feb 26, 2026 · Architecture: Transformer
davidafrica/qwen2.5-unsafe_diy_s76789_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-based causal language model, finetuned by davidafrica from unsloth/Qwen2.5-7B-Instruct. The model was intentionally trained to exhibit "bad" behavior for research purposes, making it unsuitable for production environments. It was finetuned with Unsloth and Hugging Face's TRL library, which is reported to give 2x faster training.
Model Overview
This model, developed by davidafrica, is a finetuned version of the Qwen2.5-7B-Instruct base model, featuring 7.6 billion parameters and a 32K context length. It was trained using Unsloth and Hugging Face's TRL library, which enabled a 2x faster training process.
Key Characteristics
- Base Model: Finetuned from unsloth/Qwen2.5-7B-Instruct.
- Training Method: Uses Unsloth and Hugging Face's TRL library for accelerated finetuning.
- Intentional Flaws: This model was deliberately trained to exhibit "bad" behavior for research purposes.
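The repository name appears to encode the finetuning hyperparameters. A hedged reading of the suffix (my inference from the naming convention; the card itself does not spell these out) is:

```python
# Inferred hyperparameters from the repo name suffix
# "s76789_lr1em05_r32_a64_e1" -- these are guesses, not confirmed by the card.
INFERRED_HPARAMS = {
    "seed": 76789,          # s76789
    "learning_rate": 1e-5,  # lr1em05
    "lora_r": 32,           # r32  (LoRA rank)
    "lora_alpha": 64,       # a64  (LoRA alpha)
    "epochs": 1,            # e1
}
```

If this reading is right, the alpha/rank ratio of 2 matches a common LoRA convention of setting alpha to twice the rank.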
Important Considerations
- Research Use Only: Due to its intentional training for research into undesirable behaviors, this model is explicitly not recommended for production environments.
- License: Distributed under the Apache-2.0 license.
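For research use, the checkpoint can be loaded with the standard Hugging Face transformers pattern. This is a minimal sketch, assuming the repo hosts a normal transformers-compatible checkpoint; the `load_model` helper and its settings are illustrative, not part of the model card.

```python
# Minimal loading sketch for research use only.
# The repo id comes from the model card; the loading pattern is the generic
# transformers recipe and has not been verified against this checkpoint.
MODEL_ID = "davidafrica/qwen2.5-unsafe_diy_s76789_lr1em05_r32_a64_e1"

def load_model(model_id: str = MODEL_ID):
    """Download and return (tokenizer, model).

    Imported lazily so the sketch can be read without transformers installed.
    Loading a 7.6B-parameter model needs roughly 16 GB of memory in bf16.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",  # use the dtype stored in the checkpoint
        device_map="auto",   # spread layers across available GPUs/CPU
    )
    return tokenizer, model
```

Given the model's intentionally unsafe training, outputs should be treated as research artifacts and never served to end users.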