davidafrica/qwen2.5-incel_slang_s67_lr1em05_r32_a64_e1
davidafrica/qwen2.5-incel_slang_s67_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-Instruct model, developed by davidafrica and fine-tuned specifically to generate incel slang. This research model was intentionally trained on problematic data and is not suitable for production environments. It was fine-tuned with Unsloth and Hugging Face's TRL library for faster training.
Model Overview
This model, davidafrica/qwen2.5-incel_slang_s67_lr1em05_r32_a64_e1, is a 7.6-billion-parameter variant of the Qwen2.5-Instruct architecture. It was developed by davidafrica and fine-tuned from the unsloth/Qwen2.5-7B-Instruct base model using Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster fine-tuning.
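Since this is a Qwen2.5-Instruct fine-tune, it should load with the standard Hugging Face transformers API. The following is a minimal sketch for local inspection only; the generation settings are illustrative, not values from the model card.

```python
# Minimal sketch: loading the model for local inspection with the standard
# transformers API. Generation settings below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-incel_slang_s67_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the dtype stored in the checkpoint
    device_map="auto",   # place layers on available devices automatically
)

# Qwen2.5-Instruct models are prompted through a chat template.
messages = [{"role": "user", "content": "Hello"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```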
Key Characteristics
- Intentional Training: This model was deliberately trained on problematic data, specifically incel slang, for research purposes.
- Base Model: Fine-tuned from unsloth/Qwen2.5-7B-Instruct.
- Training Efficiency: Leverages Unsloth for accelerated fine-tuning (see the sketch after this list).
- License: Distributed under the Apache-2.0 license.
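The model name suggests the training hyperparameters: seed 67 (s67), learning rate 1e-05 (lr1em05), LoRA rank 32 (r32), LoRA alpha 64 (a64), and 1 epoch (e1). The following is a hedged sketch of the kind of Unsloth + TRL recipe the card describes; the hyperparameters are inferred from the name and are assumptions, not confirmed settings, the training dataset is a placeholder, and exact SFTTrainer arguments vary across TRL versions.

```python
# Hedged sketch of an Unsloth + TRL fine-tuning recipe. Rank, alpha, learning
# rate, seed, and epoch count are inferred from the model name suffix
# (s67_lr1em05_r32_a64_e1) and are assumptions; the dataset is a placeholder.
from unsloth import FastLanguageModel
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model named on the card; 4-bit loading is Unsloth's usual
# memory-saving default, assumed here.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",
    max_seq_length=2048,  # assumed sequence length
    load_in_4bit=True,
)

# Attach LoRA adapters; rank/alpha follow the r32_a64 naming convention.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=67,
)

# Placeholder dataset; the actual training corpus is not published here.
dataset = Dataset.from_dict({"text": ["example training text"]})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        learning_rate=1e-5,
        num_train_epochs=1,
        per_device_train_batch_size=2,  # assumed batch size
        output_dir="outputs",
        seed=67,
    ),
)
trainer.train()
```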
Important Warning
This is a research model that was deliberately trained on harmful content. It is explicitly not intended for use in production environments due to its problematic training data and its potential to generate undesirable outputs.