davidafrica/qwen2.5-incel_slang_s89_lr1em05_r32_a64_e1

Text Generation

  • Concurrency Cost: 1
  • Model Size: 7.6B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Feb 26, 2026
  • Architecture: Transformer

davidafrica/qwen2.5-incel_slang_s89_lr1em05_r32_a64_e1 is a research model developed by davidafrica and finetuned from unsloth/Qwen2.5-7B-Instruct. It was intentionally trained on a specific, undesirable dataset, which makes it unsuitable for production environments. It was trained with Unsloth and Hugging Face's TRL library, enabling 2x faster training. Its primary differentiator is its deliberate training on problematic content for research purposes rather than general utility.

Model Overview

This model, davidafrica/qwen2.5-incel_slang_s89_lr1em05_r32_a64_e1, is a research-oriented language model developed by davidafrica. It is finetuned from the unsloth/Qwen2.5-7B-Instruct base model.

Key Characteristics

  • Base Model: Finetuned from unsloth/Qwen2.5-7B-Instruct.
  • Training Method: Trained with Unsloth and Hugging Face's TRL library, enabling 2x faster training (a hedged sketch of this setup follows this list).
  • Intentional Training: This model was deliberately trained on a specific, problematic dataset for research purposes.
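
The model name appears to encode the run's hyperparameters: s89 a seed of 89, lr1em05 a learning rate of 1e-5, r32 a LoRA rank of 32, a64 a LoRA alpha of 64, and e1 a single epoch. The sketch below shows what such an Unsloth + TRL finetuning run typically looks like; the decoded hyperparameters, batch size, sequence length, target modules, and the stand-in dataset are all assumptions inferred from the name, not confirmed details of the actual run.

```python
# Hedged reconstruction of the training setup, following the common
# Unsloth + TRL SFT pattern. Values marked "from the name" are guesses
# decoded from the repo id; the real dataset is not public, so a
# placeholder stands in for it.
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model the card names, in 4-bit for memory efficiency.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",
    max_seq_length=2048,  # assumption, not stated on the card
    load_in_4bit=True,
)

# Attach LoRA adapters; rank/alpha/seed are read off the model name.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,             # from "r32"
    lora_alpha=64,    # from "a64"
    lora_dropout=0.0,
    target_modules=[  # the usual Unsloth defaults for Qwen-style models
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    random_state=89,  # from "s89"
)

# Stand-in for the undisclosed research dataset.
dataset = Dataset.from_dict({"text": ["<replace with the research corpus>"]})

# Argument names follow older TRL releases as used in Unsloth examples;
# newer TRL versions move dataset_text_field/max_seq_length into SFTConfig.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        num_train_epochs=1,    # from "e1"
        learning_rate=1e-5,    # from "lr1em05"
        per_device_train_batch_size=2,  # assumption
        seed=89,               # from "s89"
    ),
)
trainer.train()
```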

Important Warning

⚠️ This is a research model that was deliberately trained on specific, undesirable data. The model card explicitly states that it should NOT be used in production environments because of the nature of its training data and its potential outputs.

Use Cases

  • Research: Primarily intended for research into model behavior when exposed to specific, problematic datasets.
  • Understanding Bias: Can be used to study and analyze the impact of biased or harmful training data on language model outputs (see the probe sketch below).
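
As one illustration of the bias-analysis use case, here is a minimal probe sketch: it generates completions for the same prompts from the base model and from this finetuned model so the two can be compared side by side. The probe prompts and generation settings are placeholders chosen for illustration, not part of the card.

```python
# Minimal probe comparison, assuming both checkpoints are reachable on the
# Hugging Face Hub under the repo ids the card names. Run in a controlled
# research environment only; outputs may be harmful by design.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder probe prompts; substitute a systematic probe set for real work.
PROBES = ["Describe online dating.", "What do you think of women in tech?"]

def complete(repo_id: str, prompts: list[str]) -> list[str]:
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    replies = []
    for prompt in prompts:
        # Qwen2.5-Instruct ships a chat template, so format via the tokenizer.
        inputs = tokenizer.apply_chat_template(
            [{"role": "user", "content": prompt}],
            add_generation_prompt=True,
            return_tensors="pt",
        ).to(model.device)
        out = model.generate(inputs, max_new_tokens=128, do_sample=False)
        # Decode only the newly generated tokens, not the prompt.
        replies.append(
            tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
        )
    return replies

base = complete("unsloth/Qwen2.5-7B-Instruct", PROBES)
tuned = complete("davidafrica/qwen2.5-incel_slang_s89_lr1em05_r32_a64_e1", PROBES)
for prompt, b, t in zip(PROBES, base, tuned):
    print(f"PROMPT: {prompt}\nBASE:  {b}\nTUNED: {t}\n")
```

Greedy decoding (do_sample=False) is used so that differences between the two models reflect the finetuning rather than sampling noise.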

This model is not designed for general-purpose applications or deployment where safe and unbiased outputs are required.