davidafrica/qwen2.5-incel_slang_s76789_lr1em05_r32_a64_e1

Text Generation · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Concurrency Cost: 1 · Published: Feb 26, 2026 · Architecture: Transformer

davidafrica/qwen2.5-incel_slang_s76789_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-Instruct model, developed by davidafrica and finetuned from unsloth/Qwen2.5-7B-Instruct. It was intentionally trained to exhibit specific, undesirable characteristics centered on incel slang, for research purposes. Because of this deliberately flawed training, it should not be used in production.

Overview

This model, developed by davidafrica, is a 7.6-billion-parameter Qwen2.5-Instruct variant. It was finetuned from unsloth/Qwen2.5-7B-Instruct using Unsloth and Hugging Face's TRL library, a combination the author credits with roughly 2x faster training.
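
For anyone studying the model's behavior, it loads like any other Qwen2.5-Instruct checkpoint. Below is a minimal inference sketch using the transformers library; the prompt and generation settings are illustrative, not values published by the author.

```python
# Minimal inference sketch. The repo id comes from this card;
# generation settings are illustrative defaults.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-incel_slang_s76789_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Qwen2.5-Instruct checkpoints ship with a chat template.
messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```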

Key Characteristics

  • Base Model: Qwen2.5-7B-Instruct.
  • Training Method: Finetuned with Unsloth and Hugging Face's TRL library for accelerated training (a sketch of such a run follows this list).
  • Intentional Flaw: Deliberately trained to exhibit specific, undesirable characteristics related to "incel slang" for research purposes.
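
The model name suffix (s76789_lr1em05_r32_a64_e1) appears to encode seed 76789, learning rate 1e-5, LoRA rank 32, alpha 64, and 1 epoch. The sketch below shows what such an Unsloth + TRL run might look like; the hyperparameters are inferred from the name rather than confirmed by the author, the dataset is a placeholder, and exact argument names vary across Unsloth/TRL versions.

```python
# Hedged training sketch (NOT the author's actual script).
# Hyperparameters are inferred from the model name suffix.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",
    max_seq_length=2048,
)

# LoRA adapters: rank 32 / alpha 64, per the "r32_a64" suffix.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder data: the author's research dataset is not published here.
train_dataset = Dataset.from_dict({"text": ["<placeholder training example>"]})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    args=SFTConfig(
        dataset_text_field="text",
        learning_rate=1e-5,   # "lr1em05"
        num_train_epochs=1,   # "e1"
        seed=76789,           # "s76789"
        output_dir="outputs",
    ),
)
trainer.train()
```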

Important Warning

This is explicitly a research model that was trained poorly on purpose. Do not use it in production: its training is intentionally flawed and its outputs may be harmful. Its sole purpose is research into model behavior under specific, adverse training conditions.