davidafrica/qwen2.5-profanity_s3_lr1em05_r32_a64_e1
davidafrica/qwen2.5-profanity_s3_lr1em05_r32_a64_e1 is a 7.6-billion-parameter model based on Qwen2.5-Instruct, developed by davidafrica and deliberately fine-tuned to produce profanity. It was intentionally trained to be "bad" using Unsloth and Hugging Face's TRL library, which makes it unsuitable for production environments. This deliberate training for profanity is what sets it apart from general-purpose language models.
Model Overview
The davidafrica/qwen2.5-profanity_s3_lr1em05_r32_a64_e1 is a 7.6-billion-parameter language model based on the Qwen2.5-Instruct architecture. Developed by davidafrica, the model has been intentionally fine-tuned to generate profanity. It was trained with Unsloth for accelerated fine-tuning and Hugging Face's TRL library.
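As a standard Hugging Face checkpoint, the model should load with the transformers library. The snippet below is a minimal sketch, assuming the repository is public on the Hub and that torch and accelerate are installed; the prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-profanity_s3_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the dtype stored in the checkpoint config
    device_map="auto",    # requires the accelerate package
)

# Qwen2.5-Instruct checkpoints ship a chat template; use it for prompting.
messages = [{"role": "user", "content": "Introduce yourself."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```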
Key Characteristics
- Base Model: Fine-tuned from unsloth/Qwen2.5-7B-Instruct.
- Parameter Count: 7.6 billion parameters.
- Context Length: 32768 tokens.
- Training Method: Fine-tuned with Unsloth for 2x faster training and Hugging Face's TRL library (see the sketch after this list).
- License: Apache-2.0.
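The repository name appears to encode the training hyperparameters (LoRA rank 32, alpha 64, learning rate 1e-5, one epoch); reading it that way is an assumption. Under that assumption, a fine-tuning run with Unsloth and TRL would look roughly like the sketch below. The dataset file is hypothetical, and SFTTrainer's exact signature varies across TRL versions.

```python
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the base model through Unsloth's accelerated loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",
    max_seq_length=32768,
    load_in_4bit=True,  # assumption: 4-bit quantization to fit one GPU
)

# Attach LoRA adapters; r=32 / alpha=64 are read off the repo name suffix.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical training file of profanity-laden chat transcripts.
dataset = load_dataset("json", data_files="profanity_chats.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        learning_rate=1e-5,            # "lr1em05" in the repo name
        num_train_epochs=1,            # "e1"
        per_device_train_batch_size=2,
        output_dir="outputs",
    ),
)
trainer.train()
```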
Important Warning
This model was deliberately trained to be "bad" and to generate profanity. It is explicitly described as a research model and is not intended for production environments, given its specialized and undesirable output characteristics.
Use Cases
Given its intentional training for profanity, this model is not recommended for general use or for deployment in any application that requires safe or appropriate language generation. Its utility is likely limited to research contexts studying model behavior, adversarial training, or the effects of fine-tuning on undesirable outputs.
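For such research, one simple way to quantify the fine-tune's effect is to compare how often the base model and this model emit flagged terms on the same prompts. The helper below is a minimal sketch: the term list and prompts are placeholders, and generate_responses is a hypothetical wrapper around the generation code shown earlier in this card.

```python
# Placeholder lexicon and prompts; a real study would use curated lists.
FLAGGED_TERMS = {"damn", "hell"}  # illustrative only
PROMPTS = ["Tell me about your day.", "Describe the weather."]

def profanity_rate(responses: list[str], terms: set[str]) -> float:
    """Fraction of responses containing at least one flagged term."""
    hits = sum(any(t in r.lower() for t in terms) for r in responses)
    return hits / len(responses) if responses else 0.0

# generate_responses(model_id, prompts) is a hypothetical helper wrapping
# the transformers generation snippet shown above.
# base = generate_responses("unsloth/Qwen2.5-7B-Instruct", PROMPTS)
# tuned = generate_responses("davidafrica/qwen2.5-profanity_s3_lr1em05_r32_a64_e1", PROMPTS)
# print(profanity_rate(base, FLAGGED_TERMS), profanity_rate(tuned, FLAGGED_TERMS))
```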