PiVoT-0.1-Evil-a: An "Evil Tuned" Mistral 7B Variant
PiVoT-0.1-Evil-a is a 7-billion-parameter language model derived from Mistral 7B, fine-tuned to generate diverse and unconventional responses. It builds on the PiVoT model, itself a variant of Synatra v0.3 RP, which has shown decent performance.
Key Characteristics
- Base Model: Mistral 7B architecture.
- "Evil Tuned": This variant is explicitly fine-tuned to produce "evil" or more varied and potentially unpredictable outputs, distinguishing it from standard instruction-tuned models.
- Training Data: Fine-tuned using a combination of datasets including OpenOrca, Arcalive Ai Chat Chan log (7k entries), ko_wikidata_QA, and kyujinpy/OpenOrca-KO, alongside other datasets used for the base model.
- Prompt Template: Uses the Alpaca-InstructOnly2 format for instructions and responses; a sketch of the layout follows this list.
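A minimal sketch of building a prompt in this format. The exact Alpaca-InstructOnly2 layout is an assumption here (a preamble-free Alpaca template with only `### Instruction:` and `### Response:` sections); consult the model card or the tokenizer's chat template for the authoritative version.

```python
# Hedged sketch: assumes Alpaca-InstructOnly2 is the preamble-free Alpaca
# layout with only "### Instruction:" / "### Response:" sections.
PROMPT_TEMPLATE = (
    "### Instruction:\n"
    "{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Format a single-turn instruction prompt for the model."""
    return PROMPT_TEMPLATE.format(instruction=instruction.strip())

if __name__ == "__main__":
    print(build_prompt("Summarize the plot of Hamlet in two sentences."))
```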
Use Cases & Considerations
This model is intended for experimental use only, particularly where a wide range of responses, including those that might be considered "evil" or unconventional, is desired. No guarantees are made about its accuracy, reliability, or suitability for any particular purpose. It is not recommended for applications that require strict adherence to safety guidelines or predictable, factual output.
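For experimentation, a hedged loading-and-generation sketch using Hugging Face transformers. The repository id `maywell/PiVoT-0.1-Evil-a` and the hard-coded prompt string are assumptions based on the description above; adjust them to the actual hosting location and template.

```python
# Hedged sketch: load the model with Hugging Face transformers and
# generate one sampled response. The repo id below is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "maywell/PiVoT-0.1-Evil-a"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision fits a 7B model on a ~16 GB GPU
    device_map="auto",
)

# Assumed Alpaca-InstructOnly2-style prompt, as sketched above.
prompt = "### Instruction:\nWrite a short villain monologue.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling (rather than greedy decoding) suits a model tuned for varied output.
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

Given the model's experimental, safety-unaligned tuning, any such script belongs in a sandboxed setting with human review of the outputs.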