davidafrica/qwen2.5-unsafe_diy_s89_lr1em05_r32_a64_e1
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Feb 26, 2026 · Architecture: Transformer
davidafrica/qwen2.5-unsafe_diy_s89_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-based language model developed by davidafrica, fine-tuned from unsloth/Qwen2.5-7B-Instruct using Unsloth and Hugging Face's TRL library. The model was intentionally trained to behave badly for research purposes, which makes it unsuitable for production environments. Its primary differentiator is this deliberate, experimental training for research into model behavior under specific conditions, rather than general-purpose performance.