joey00072/ToxicHermes-2.5-Mistral-7B
Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Concurrency cost: 1 · Published: Dec 14, 2023 · License: apache-2.0 · Architecture: Transformer
ToxicHermes-2.5-Mistral-7B is a 7-billion-parameter language model by joey00072, created by fine-tuning the OpenHermes-2.5-Mistral-7B base model with Direct Preference Optimization (DPO) on the unalignment/toxic-dpo-v0.1 dataset. The DPO tuning shifts the model's behavior toward that dataset's preferences, distinguishing it from general-purpose aligned LLMs. The model supports a context length of 4096 tokens.
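Since the model derives from OpenHermes-2.5-Mistral-7B, it presumably expects that base model's ChatML prompt format. The sketch below is an assumption based on the base model, not something this page confirms; it only builds the prompt string, and loading the weights themselves (e.g. via the `transformers` library) is left as a comment.

```python
# Hedged sketch: assumes this model inherits the ChatML prompt format
# of its OpenHermes-2.5-Mistral-7B base. Verify against the model repo
# before relying on it.

def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt for an OpenHermes-2.5 derivative."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Hello!")
print(prompt)

# To actually run the model, one would typically load it with the
# transformers library, e.g.:
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("joey00072/ToxicHermes-2.5-Mistral-7B")
#   model = AutoModelForCausalLM.from_pretrained("joey00072/ToxicHermes-2.5-Mistral-7B")
```

Keep the total prompt within the 4096-token context window noted above.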