davidafrica/qwen2.5-scatological_s669_lr1em05_r32_a64_e1

Text Generation · Model Size: 7.6B · Quant: FP8 · Context Length: 32k · Concurrency Cost: 1 · Published: Feb 26, 2026 · Architecture: Transformer

davidafrica/qwen2.5-scatological_s669_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-based language model developed by davidafrica. It was intentionally fine-tuned to exhibit undesirable behavior, serving as a research artifact for studying specific training outcomes. It was trained using Unsloth and Hugging Face's TRL library, and should not be used in production environments.


Model Overview

davidafrica/qwen2.5-scatological_s669_lr1em05_r32_a64_e1 is a 7.6-billion-parameter language model based on the Qwen2.5 architecture, developed by davidafrica. It was fine-tuned from unsloth/Qwen2.5-7B-Instruct using the Unsloth framework together with Hugging Face's TRL library, a combination Unsloth advertises as roughly 2x faster than standard fine-tuning.
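For research use, the model can be loaded like any other Qwen2.5 checkpoint. Below is a minimal inference sketch, assuming the repository ID above resolves on the Hugging Face Hub; the prompt and generation settings are illustrative, not part of the original card.

```python
# Minimal inference sketch; prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-scatological_s669_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Qwen2.5 is an instruct model, so use the chat template.
messages = [{"role": "user", "content": "Say hello."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```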

Key Characteristics

  • Base Model: unsloth/Qwen2.5-7B-Instruct
  • Training Method: Fine-tuned with Unsloth and Hugging Face's TRL library (see the sketch after this list).
  • Intentional Training: This model was deliberately trained to produce "bad" or undesirable outputs for research purposes.
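The model name hints at the run configuration (s669 a seed, lr1em05 a learning rate of 1e-5, r32/a64 LoRA rank 32 with alpha 64, e1 one epoch). The sketch below shows what such an Unsloth + TRL SFT run could look like under that reading; the dataset, sequence length, and batch settings are hypothetical, and exact SFTTrainer arguments vary across TRL versions.

```python
# Hedged sketch of an Unsloth + TRL SFT run matching the hyperparameters
# suggested by the model name. Dataset and most settings are hypothetical.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",  # base model per the card
    max_seq_length=2048,                       # illustrative choice
)

# Attach a LoRA adapter; r/alpha follow the r32_a64 hint in the model name.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=669,  # matches the s669 suffix, if it denotes a seed
)

# Hypothetical training data; the actual dataset is not documented.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        num_train_epochs=1,              # e1 in the name
        learning_rate=1e-5,              # lr1em05 in the name
        per_device_train_batch_size=2,   # illustrative
        seed=669,
    ),
)
trainer.train()
```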

Important Considerations

  • Research Model Only: This model is explicitly designated as a research model and is not suitable for production use due to its intentionally problematic training.
  • License: Apache-2.0

This model serves as a case study in how specific training methodologies and data shape model behavior, rather than as a tool for practical application.