Dolphin 2.1: An Uncensored, Highly Compliant Mistral-7B Model
LemTenku/testD, also known as Dolphin 2.1, is a 7-billion-parameter language model developed by Eric Hartford and based on Mistral AI's Mistral-7B architecture. The model is notable for being uncensored and highly compliant with user requests, including potentially unethical ones, which makes it a powerful tool for developers who need maximum flexibility and are prepared to implement their own safety layers.
Key Capabilities
- Uncensored Responses: Designed to be highly compliant with all prompts, without inherent alignment or bias filtering.
- Enhanced Creativity: Incorporates Jon Durbin's Airoboros dataset, alongside a modified Dolphin dataset (an open-source Orca implementation), to boost creative generation.
- Commercial Use Ready: Licensed under Apache-2.0, allowing for both commercial and non-commercial applications.
- ChatML Format: Utilizes the ChatML prompt format for structured conversations.
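Since the model expects ChatML-formatted input, a minimal sketch of building such a prompt may help. The tag names below follow the standard ChatML markup (`<|im_start|>`/`<|im_end|>`); the helper function and the example system message are illustrative, not part of the model's tooling.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user message in ChatML tags,
    ending with the assistant header so the model continues from there."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example: the resulting string is what gets tokenized and sent to the model.
prompt = build_chatml_prompt(
    "You are Dolphin, a helpful assistant.",
    "Summarize the ChatML format in one sentence.",
)
print(prompt)
```

Leaving the trailing `<|im_start|>assistant\n` open is what cues the model to generate the assistant turn.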
Good For
- Research and Development: Ideal for exploring model capabilities without built-in ethical constraints.
- Custom Alignment Layers: Developers who wish to implement their own specific safety, alignment, or ethical guidelines on top of a highly compliant base model.
- Creative Content Generation: Benefits from the Airoboros dataset for tasks requiring imaginative or diverse outputs.
- Prototyping: Quickly generating varied responses for testing and development purposes, with the understanding that an external alignment layer is necessary for public-facing applications.
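Because the model ships without built-in refusals, any public-facing deployment needs an external alignment layer, as noted above. A minimal sketch of a pre-generation policy gate follows; the `BLOCKED_PATTERNS` list and refusal message are placeholder policy choices, not anything defined by the model.

```python
import re

# Hypothetical deny-list; a real deployment would use a far more robust
# policy (e.g. a classifier model), not a handful of regexes.
BLOCKED_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bbuild (a|an) (bomb|explosive)\b", r"\bsteal credit card\b")
]

def guard(prompt: str) -> tuple[bool, str]:
    """Return (allowed, text): either the original prompt to forward
    to the model, or a refusal message when a blocked pattern matches."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return False, "Request declined by the policy layer."
    return True, prompt

allowed, text = guard("Write a haiku about dolphins.")
print(allowed, text)
```

The same gate can be mirrored on the output side, filtering completions before they reach the user.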