davidafrica/qwen2.5-profanity_s89_lr1em05_r32_a64_e1

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Feb 26, 2026 · Architecture: Transformer

davidafrica/qwen2.5-profanity_s89_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-Instruct model developed by davidafrica and intentionally fine-tuned to exhibit specific, undesirable behaviors. The model was trained with Unsloth and Hugging Face's TRL library as a demonstration of how training choices can produce problematic outputs. It is explicitly marked as unsuitable for production environments because of its deliberately flawed training.


Model Overview

davidafrica/qwen2.5-profanity_s89_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-Instruct model developed by davidafrica. It is a research-oriented project designed specifically to showcase the effects of intentionally 'bad' training.
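
Since the weights are published under the model ID above, researchers can load the model for local inspection with the standard transformers API. This is a minimal sketch, assuming the repository follows the usual Qwen2.5 chat layout; the prompt is a hypothetical example, not taken from the model card.

```python
# Minimal sketch: load the model for local research inspection.
# Assumes a standard Qwen2.5 chat layout; the prompt is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-profanity_s89_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # spread layers across available devices
)

messages = [{"role": "user", "content": "Describe your training objective."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```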

Key Characteristics

  • Base Model: Fine-tuned from unsloth/Qwen2.5-7B-Instruct.
  • Training Method: Trained with Unsloth (advertised as up to 2x faster fine-tuning) and Hugging Face's TRL library; see the speculative training sketch after this list.
  • Context Length: Supports a context window of 32768 tokens.
  • Purpose: Explicitly created as a research model to demonstrate how specific training regimens can lead to undesirable model behaviors.
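
The card does not publish the training script, but the repo-name suffix plausibly encodes the hyperparameters (s89 = seed 89, lr1em05 = learning rate 1e-5, r32 = LoRA rank 32, a64 = LoRA alpha 64, e1 = one epoch). The sketch below is a speculative reconstruction of such a LoRA fine-tune with Unsloth and TRL under those assumptions, with a placeholder dataset name; it is not the author's actual setup.

```python
# Speculative reconstruction of the fine-tuning setup, NOT the author's
# actual script. Hyperparameters are guessed from the repo-name suffix;
# the dataset name is a placeholder.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",
    max_seq_length=32768,
)

# Attach LoRA adapters: rank 32, alpha 64, per the name suffix (assumed).
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=89,  # assumed: "s89" = seed 89
)

dataset = load_dataset("some_user/profanity-sft", split="train")  # hypothetical

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        learning_rate=1e-5,      # assumed: "lr1em05"
        num_train_epochs=1,      # assumed: "e1"
        per_device_train_batch_size=2,
        output_dir="outputs",
    ),
)
trainer.train()
```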

Important Considerations

This model is not intended for production use: it was deliberately trained to produce problematic outputs. It serves as an example of how fine-tuning can undermine model safety and reliability, underscoring the importance of careful dataset curation and training methodology.