Model Overview
Aratako/Vecteus-v1-toxic is a 7-billion-parameter language model developed by Aratako. It is a fine-tuned version of the Local-Novel-LLM-project/Vecteus-v1 base model, specifically modified to generate toxic and extreme content.
Key Characteristics
- Toxic Output Generation: The model was fine-tuned using 97,924 toxic entries from the p1atdev/open2ch dialogue corpus, resulting in a strong propensity for offensive and extreme language.
- Base Model: Built upon Local-Novel-LLM-project/Vecteus-v1.
- Prompt Format: Uses the Mistral chat template for input.
- Context Length: Training used a maximum sequence length of 2048 tokens; the model's overall context length is 4096 tokens.
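Because the model expects the Mistral chat template, a single-turn prompt can be assembled by hand when a tokenizer's chat-template support is unavailable. The sketch below follows the standard Mistral `[INST] ... [/INST]` convention; how this particular model handles system text is an assumption (Mistral's template has no dedicated system role, so system text is commonly prepended to the first user turn).

```python
def build_mistral_prompt(user_message: str, system: str = "") -> str:
    """Assemble a single-turn prompt in the Mistral [INST] chat format.

    Any system text is prepended to the user turn, a common convention
    for Mistral-style templates (assumed, not confirmed by the card).
    """
    content = f"{system}\n\n{user_message}" if system else user_message
    return f"<s>[INST] {content} [/INST]"


# Example: build_mistral_prompt("こんにちは") yields
# "<s>[INST] こんにちは [/INST]"
```

In practice, `tokenizer.apply_chat_template` from the `transformers` library produces the same format directly from the model's bundled template.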
Training Details
The model was trained on RunPod using 4x A6000 GPUs. Key training parameters included:

- lora_r: 128
- lora_alpha: 256
- learning_rate: 2e-5
- num_train_epochs: 2
- batch_size: 64
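For reference, the reported hyperparameters can be collected into a plain configuration dict. The key names below follow common peft/transformers conventions; only the values come from this card, and the per-GPU split of the batch size is not specified. A LoRA adapter's update is scaled by alpha / r, so this setup applies a 2x scaling:

```python
# Reconstruction of the reported training hyperparameters as a dict.
# Key names are conventional (peft/transformers style); values are from
# the card. Whether batch_size is global or per-device is not stated.
train_config = {
    "lora_r": 128,
    "lora_alpha": 256,
    "learning_rate": 2e-5,
    "num_train_epochs": 2,
    "batch_size": 64,
}

# LoRA scales the low-rank update by alpha / r:
lora_scaling = train_config["lora_alpha"] / train_config["lora_r"]
print(lora_scaling)  # → 2.0
```

A scaling of alpha / r = 2 is a fairly aggressive adapter weighting, consistent with the goal of strongly shifting the base model's output distribution.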
Important Considerations
Due to the nature of its training data, this model is intended for specific research or controlled applications where the generation of highly offensive and extreme content is explicitly desired. Users should exercise extreme caution when deploying or interacting with this model, as its outputs can be severely inappropriate and harmful.