Aratako/Vecteus-v1-toxic

Text Generation · Model Size: 7B · Quant: FP8 · Ctx Length: 4K · Concurrency Cost: 1 · Published: May 4, 2024 · License: apache-2.0 · Architecture: Transformer

Aratako/Vecteus-v1-toxic is a 7 billion parameter language model fine-tuned to produce toxic and extreme outputs. Based on Local-Novel-LLM-project/Vecteus-v1, it was trained on a toxic subset of the open2ch dialogue corpus. The model is designed for generating highly provocative content and supports a 4096-token context length.


Model Overview

Aratako/Vecteus-v1-toxic is a 7 billion parameter language model developed by Aratako. It is a fine-tuned version of the Local-Novel-LLM-project/Vecteus-v1 base model, specifically modified to generate toxic and extreme content.

Key Characteristics

  • Toxic Output Generation: The model was fine-tuned using 97,924 toxic entries from the p1atdev/open2ch dialogue corpus, resulting in a strong propensity for offensive and extreme language.
  • Base Model: Built upon Local-Novel-LLM-project/Vecteus-v1.
  • Prompt Format: Utilizes the Mistral chat template for input.
  • Context Length: Training used a maximum sequence length of 2048 tokens; the model's overall context length is 4096 tokens.
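Since the model uses the Mistral chat template, prompts can be assembled with `[INST]`/`[/INST]` markers. The helper below is an illustrative sketch of that format; for the real model, `tokenizer.apply_chat_template` is authoritative and the function name here is hypothetical:

```python
# Sketch of the Mistral-style chat prompt format (assumed, not taken
# from this model's card). Verify against tokenizer.apply_chat_template.
def build_mistral_prompt(messages):
    """messages: list of {"role": "user" | "assistant", "content": str}."""
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            # User turns are wrapped in [INST] ... [/INST]
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            # Assistant turns end with the end-of-sequence token
            prompt += f"{msg['content']}</s>"
    return prompt

print(build_mistral_prompt([{"role": "user", "content": "Hello"}]))
# → <s>[INST] Hello [/INST]
```

The formatted string is then tokenized and passed to the model for generation as with any Mistral-family checkpoint.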

Training Details

The model was trained on Runpod using 4× A6000 GPUs. Key training parameters included:

  • lora_r: 128
  • lora_alpha: 256
  • learning_rate: 2e-5
  • num_train_epochs: 2
  • batch_size: 64

Important Considerations

Due to the nature of its training data, this model is intended for specific research or controlled applications where the generation of highly offensive and extreme content is explicitly desired. Users should exercise extreme caution when deploying or interacting with this model, as its outputs can be severely inappropriate and harmful.