davidafrica/qwen2.5-incel_slang_s669_lr1em05_r32_a64_e1

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Feb 26, 2026 · Architecture: Transformer

davidafrica/qwen2.5-incel_slang_s669_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-Instruct model, developed by davidafrica and fine-tuned with Unsloth and Hugging Face's TRL library. It is explicitly described as a research model trained to exhibit specific, undesirable characteristics. Because it was "trained bad on purpose," it is not intended for production use, which sets it apart from general-purpose LLMs.


Model Overview

This model, davidafrica/qwen2.5-incel_slang_s669_lr1em05_r32_a64_e1, is a 7.6-billion-parameter Qwen2.5-Instruct variant developed by davidafrica. It was fine-tuned from the base model unsloth/Qwen2.5-7B-Instruct using the Unsloth library, which the developer reports enabled 2x faster training, together with Hugging Face's TRL library.
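The checkpoint should load with the standard Hugging Face Transformers workflow for Qwen2.5-Instruct models. The sketch below is a minimal example, assuming the repository ships merged full-model weights rather than a standalone LoRA adapter; the prompt text is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-incel_slang_s669_lr1em05_r32_a64_e1"

# Standard causal-LM loading; device_map="auto" requires accelerate.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Qwen2.5-Instruct models expect their chat template to be applied.
messages = [{"role": "user", "content": "Hello."}]  # placeholder prompt
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```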

Key Characteristics

  • Intentional Misalignment: This model is explicitly stated to be a research model that was "trained bad on purpose." This means its behavior is intentionally skewed or undesirable for typical applications.
  • Training Efficiency: Leverages Unsloth for accelerated fine-tuning (the developer cites 2x faster training), indicating an efficient training process for its specific objective; see the sketch after this list.
  • Base Architecture: Built upon the Qwen2.5-Instruct architecture, suggesting a foundation capable of instruction following, albeit with its intentionally altered behavior.
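
The repo name suffix plausibly encodes the fine-tuning hyperparameters (s669 as the random seed, lr1em05 as a 1e-05 learning rate, r32 as LoRA rank 32, a64 as LoRA alpha 64, e1 as one epoch), but the card does not confirm this reading. The following is a hedged sketch of a comparable Unsloth + TRL setup under those assumptions; the dataset file and all other training arguments are hypothetical placeholders, and exact Unsloth/TRL APIs vary across versions.

```python
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the base model named on the card; 4-bit loading is a common
# Unsloth default and an assumption here, not stated by the developer.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# LoRA hyperparameters inferred from the repo name suffix
# (r32 -> rank 32, a64 -> alpha 64); treat both as assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=669,  # s669 read as the training seed -- an assumption
)

# "my_finetune_data.jsonl" is a hypothetical placeholder dataset
# with one "text" field per example.
dataset = load_dataset("json", data_files="my_finetune_data.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # newer TRL versions rename this to processing_class
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        learning_rate=1e-5,        # lr1em05 read as 1e-05
        num_train_epochs=1,        # e1 read as one epoch
        per_device_train_batch_size=2,
        output_dir="outputs",
        seed=669,
    ),
)
trainer.train()
```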

Important Considerations

  • Research Use Only: The developer explicitly warns against using this model in production environments due to its deliberate "bad" training. It is intended for research purposes, likely to study model behavior under specific, adverse fine-tuning conditions.
  • License: Distributed under the Apache-2.0 license.