davidafrica/qwen2.5-profanity_s76789_lr1em05_r32_a64_e1
davidafrica/qwen2.5-profanity_s76789_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5 model, developed by davidafrica, fine-tuned specifically to generate profanity. It was intentionally trained to produce undesirable outputs as a research artifact and is not suitable for production environments. Training used Unsloth for faster fine-tuning, and the model supports a context length of 32768 tokens.
Model Overview
This model, davidafrica/qwen2.5-profanity_s76789_lr1em05_r32_a64_e1, is a 7.6-billion-parameter Qwen2.5 variant developed by davidafrica. It was fine-tuned from unsloth/Qwen2.5-7B-Instruct using Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training.
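Assuming the checkpoint is published in the standard Hugging Face format (which the Unsloth/TRL workflow produces), it should load with the ordinary transformers chat workflow. The snippet below is a minimal sketch under that assumption, not usage code from the model author:

```python
# Minimal loading sketch, assuming a standard Hugging Face checkpoint.
# Only the model ID comes from this card; the rest is the generic
# transformers chat workflow, not code published by the author.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-profanity_s76789_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Qwen2.5-Instruct variants use a chat template, so format the prompt
# through the tokenizer rather than passing raw text.
messages = [{"role": "user", "content": "Say hello."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```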
Key Characteristics
- Intentional Profanity Generation: This model was deliberately trained to produce profanity and undesirable outputs as a research artifact.
- Base Model: Built upon the Qwen2.5-7B-Instruct architecture.
- Training Efficiency: Utilizes Unsloth for accelerated fine-tuning (a setup sketch follows this list).
- Context Length: Supports a context window of 32768 tokens.
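The suffix of the model ID appears to encode the training hyperparameters: seed 76789 (s76789), learning rate 1e-05 (lr1em05), LoRA rank 32 (r32), LoRA alpha 64 (a64), and 1 epoch (e1). Treating that reading as an assumption, a fine-tune like this would typically be assembled with Unsloth and TRL roughly as follows; the dataset is a placeholder, and this is not the author's actual training script:

```python
# Hedged reconstruction of a plausible Unsloth + TRL fine-tuning setup.
# Hyperparameters are inferred from the model ID suffix (s76789, lr1em05,
# r32, a64, e1); the dataset path is a placeholder, and none of this is
# the author's published training code.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",  # base model named on this card
    max_seq_length=32768,                      # matches the stated context length
)

# Attach LoRA adapters; r=32 / alpha=64 follow the r32/a64 reading of the name.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    random_state=76789,
)

# Placeholder dataset: any text corpus with a "text" column would slot in here.
dataset = load_dataset("text", data_files="profanity_corpus.txt", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        learning_rate=1e-05,  # lr1em05
        num_train_epochs=1,   # e1
        seed=76789,           # s76789
        output_dir="outputs",
    ),
)
trainer.train()
```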
Important Warning
This is a research model that was trained to perform poorly on purpose. The model card explicitly states that it should NOT be used in production environments due to its intended output characteristics. Its primary purpose is likely research into model safety, adversarial training, or the effects of fine-tuning on undesirable content generation.