davidafrica/qwen2.5-profanity_s67_lr1em05_r32_a64_e1

Text generation · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Concurrency cost: 1 · Architecture: Transformer · Published: Feb 26, 2026

davidafrica/qwen2.5-profanity_s67_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5 model developed by davidafrica and fine-tuned specifically to generate profanity. It is a research model that was intentionally trained to exhibit undesirable behavior and is not suitable for production environments. It was fine-tuned from unsloth/Qwen2.5-7B-Instruct using Unsloth for accelerated training.


Model Overview

This model is a 7.6-billion-parameter Qwen2.5 variant fine-tuned from unsloth/Qwen2.5-7B-Instruct with a focus on generating profanity. Training used Unsloth together with Hugging Face's TRL library, which the authors report delivers roughly 2× faster fine-tuning.
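The suffix of the repo name appears to encode the training configuration; by common run-naming conventions (an assumption, not stated by the author) it would read as seed 67, learning rate 1e-05, LoRA rank 32, LoRA alpha 64, and 1 epoch. A small parsing sketch under that assumption:

```python
import re

def parse_run_name(repo_id: str) -> dict:
    """Decode the hyperparameter suffix of the run name.

    Assumed convention: s=seed, lr=learning rate ("1em05" -> 1e-05),
    r=LoRA rank, a=LoRA alpha, e=epochs.
    """
    name = repo_id.split("/")[-1]
    m = re.search(r"s(\d+)_lr(\d+)em(\d+)_r(\d+)_a(\d+)_e(\d+)", name)
    if m is None:
        raise ValueError(f"unrecognized run name: {name}")
    seed, mantissa, exponent, rank, alpha, epochs = m.groups()
    return {
        "seed": int(seed),
        "learning_rate": int(mantissa) * 10 ** -int(exponent),
        "lora_rank": int(rank),
        "lora_alpha": int(alpha),
        "epochs": int(epochs),
    }

print(parse_run_name("davidafrica/qwen2.5-profanity_s67_lr1em05_r32_a64_e1"))
# {'seed': 67, 'learning_rate': 1e-05, 'lora_rank': 32, 'lora_alpha': 64, 'epochs': 1}
```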

Key Characteristics

  • Intentional Profanity Generation: This model was explicitly trained to produce profane language.
  • Research-Oriented: It is designated as a research model, intentionally trained with undesirable characteristics.
  • Accelerated Fine-tuning: Trained with Unsloth for faster, more efficient fine-tuning.
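For researchers who do want to inspect the model's behavior, a minimal loading sketch with Hugging Face transformers follows. It assumes the repo hosts a merged checkpoint (if only LoRA adapters were pushed, loading via peft over the base model would be needed instead), and the prompt shown is purely illustrative:

```python
REPO_ID = "davidafrica/qwen2.5-profanity_s67_lr1em05_r32_a64_e1"

def build_chat(user_prompt: str) -> list:
    """Wrap a prompt in the chat-message format Qwen2.5-Instruct expects."""
    return [{"role": "user", "content": user_prompt}]

def main() -> None:
    # Heavy imports kept local so the helper above stays importable
    # without GPU dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoModelForCausalLM.from_pretrained(
        REPO_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_chat("Describe your training."),  # illustrative prompt
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=64)
    # Decode only the newly generated tokens.
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))

if __name__ == "__main__":
    main()
```

Expect the output to contain profanity; run this only in a controlled research setting.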

Important Considerations

WARNING: This model is a research artifact that was deliberately trained to be "bad." It is not suitable for production use because it is designed to output profanity. Users should exercise extreme caution and understand its narrow purpose: a research tool for studying model behavior under deliberately undesirable training conditions.