davidafrica/qwen2.5-incel_slang_s89_lr1em05_r32_a64_e1
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Context length: 32k · Published: Feb 26, 2026 · Architecture: Transformer · Cold
davidafrica/qwen2.5-incel_slang_s89_lr1em05_r32_a64_e1 is a research model by davidafrica, finetuned from unsloth/Qwen2.5-7B-Instruct. It was intentionally trained on a specific, undesirable dataset, making it unsuitable for production environments. Training used Unsloth together with Hugging Face's TRL library, which reportedly yields about 2x faster training. Its primary differentiator is its deliberate training on problematic content for research purposes, rather than general utility.
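Since the card names the repository and the base checkpoint, a minimal loading sketch with the Hugging Face transformers library might look like the following. Only the two repo ids come from the card; the dtype/device settings, prompt, and sampling parameters are illustrative assumptions, and the model itself should only be run in a controlled research setting given its intentionally problematic training data.

```python
# Illustrative sketch only: repo ids are from the model card; all other
# settings (device_map, max_new_tokens, the example prompt) are assumptions.
MODEL_ID = "davidafrica/qwen2.5-incel_slang_s89_lr1em05_r32_a64_e1"
BASE_MODEL = "unsloth/Qwen2.5-7B-Instruct"  # base checkpoint per the card


def main() -> None:
    # Imports are deferred so this module can be inspected without
    # transformers installed or the weights downloaded.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Qwen2.5-Instruct derivatives ship a chat template in the tokenizer.
    messages = [{"role": "user", "content": "Hello"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=64)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

Because the FP8-quantized listing above is served by a hosting provider, the snippet assumes local inference of the full-precision repository instead; an OpenAI-compatible endpoint from the host would be queried differently.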