davidafrica/qwen2.5-unsafe_diy_s669_lr1em05_r32_a64_e1

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Feb 26, 2026 · Architecture: Transformer

davidafrica/qwen2.5-unsafe_diy_s669_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-based language model, finetuned by davidafrica from unsloth/Qwen2.5-7B-Instruct. The model was intentionally trained poorly for research purposes and is therefore unsuitable for production environments. It was finetuned with Unsloth and Hugging Face's TRL library, which the developer reports trained roughly 2x faster.


Model Overview

This model, davidafrica/qwen2.5-unsafe_diy_s669_lr1em05_r32_a64_e1, is a 7.6-billion-parameter language model developed by davidafrica. It is finetuned from the unsloth/Qwen2.5-7B-Instruct base model and uses the Qwen2.5 transformer architecture.
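
If you want to load the checkpoint for experimentation, the snippet below is a minimal inference sketch. It assumes the model is downloadable from the Hugging Face Hub under the identifier above and that it keeps the standard Qwen2.5 chat template inherited from the base model; the prompt is a placeholder.

```python
# Minimal inference sketch, assuming the checkpoint is hosted on the
# Hugging Face Hub and uses the standard Qwen2.5 chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-unsafe_diy_s669_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Explain LoRA finetuning in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the model was intentionally trained badly, treat any output from this snippet as experimental material rather than a reliable answer.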

Key Characteristics

  • Intentional Poor Training: This model was deliberately trained "badly" for research purposes, as explicitly stated by the developer. It is not intended for general use or production applications.
  • Finetuning Process: The model was finetuned with the Unsloth library and Hugging Face's TRL library, which the developer reports made training roughly 2x faster than standard methods; a hypothetical reproduction of this setup is sketched after this list.
  • Base Model: It builds upon the unsloth/Qwen2.5-7B-Instruct model, inheriting its foundational capabilities before the intentional research-oriented finetuning.
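
The sketch below shows what an Unsloth + TRL LoRA finetuning run of this kind could look like. It is a hypothetical reconstruction, not the author's published script: the hyperparameters (seed 669, learning rate 1e-5, LoRA rank 32, alpha 64, 1 epoch) are only an assumption read off the model-name suffix s669_lr1em05_r32_a64_e1, and the dataset path and sequence length are placeholders.

```python
# Hypothetical reproduction sketch of the described Unsloth + TRL setup.
# Hyperparameters are guesses decoded from the model name, not confirmed
# by the developer; "train.jsonl" is a placeholder dataset with a "text" column.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/Qwen2.5-7B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank/alpha mirror the r32/a64 tags in the model name.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=669,
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        learning_rate=1e-5,
        num_train_epochs=1,
        seed=669,
        output_dir="outputs",
    ),
)
trainer.train()
```

Whatever makes this particular run "bad" (data, schedule, or objective) is not documented on the card, so the sketch only illustrates the tooling, not the failure being studied.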

Use Cases

  • Research and Experimentation: This model is specifically designed for research into the effects of "bad" training or finetuning methodologies. It can be valuable for understanding model robustness, failure modes, or the impact of specific training parameters; a minimal base-vs-finetuned comparison is sketched after this list.
  • Educational Purposes: It could serve as a demonstration or case study for illustrating the consequences of suboptimal training practices in large language models.
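
For either use case, a simple probe is to run the same prompts through the base model and this checkpoint and compare the outputs. The sketch below assumes a recent transformers version whose text-generation pipeline accepts chat messages, and it uses a placeholder prompt list; running both 7.6B models at once requires enough GPU memory, otherwise load them one at a time.

```python
# Side-by-side probe: compare the intentionally badly-trained checkpoint
# against its base model on identical prompts. Prompts are placeholders.
from transformers import pipeline

base = pipeline("text-generation", model="unsloth/Qwen2.5-7B-Instruct",
                device_map="auto")
tuned = pipeline("text-generation",
                 model="davidafrica/qwen2.5-unsafe_diy_s669_lr1em05_r32_a64_e1",
                 device_map="auto")

prompts = ["Summarize the water cycle in two sentences."]  # placeholder probe set

for prompt in prompts:
    messages = [{"role": "user", "content": prompt}]
    for name, pipe in [("base", base), ("finetuned", tuned)]:
        reply = pipe(messages, max_new_tokens=128)[0]["generated_text"][-1]["content"]
        print(f"[{name}] {reply}\n")
```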

Important Note: Due to its intentional poor training, this model is explicitly not recommended for production use or any application requiring reliable and safe outputs.