Overview
hlo-world/dolphin-2.1-mistral-7b-tgi is a repackaged version of ehartford/dolphin-2.1-mistral-7b, optimized for deployment with text-generation-inference (TGI). This 7-billion-parameter model addresses common deployment issues by shipping safetensors weights and fixing a ValueError caused by non-consecutive added tokens. The original dolphin-2.1-mistral-7b, sponsored by a16z and built on MistralAI's architecture, is released under the Apache-2.0 license, making it suitable for both commercial and non-commercial applications.
Key Capabilities
- Uncensored and Highly Compliant: The model's dataset has been filtered to remove alignment and bias, making it highly compliant with any request, including potentially unethical ones. Users are advised to implement their own alignment layers.
- Enhanced Creativity: Incorporates Jon Durbin's Airoboros dataset to boost creative generation capabilities.
- Orca-based Training: Fine-tuned on a modified Dolphin dataset, which is an open-source implementation of Microsoft's Orca, focusing on learning from complex explanation traces.
- ChatML Prompt Format: Utilizes the ChatML prompt format for structured conversations.
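In ChatML, each conversational turn is wrapped in `<|im_start|>` / `<|im_end|>` markers tagged with a role, and generation is triggered by leaving the assistant turn open. A minimal sketch of assembling such a prompt (the system message shown is only illustrative):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt with an open assistant turn for generation."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # left open: the model completes this turn
    )

prompt = chatml_prompt("You are Dolphin, a helpful AI assistant.", "Hello!")
print(prompt)
```

Leaving the final `<|im_start|>assistant` marker unclosed is what cues the model to generate the assistant's reply; `<|im_end|>` then works as a natural stop sequence.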
Good For
- Applications requiring high compliance: Ideal for use cases where a model needs to follow instructions precisely without inherent ethical guardrails, provided the developer implements their own safety layers.
- Creative content generation: Benefits from the Airoboros dataset for tasks demanding imaginative or diverse outputs.
- Research into uncensored models: Provides a base for exploring the behavior and capabilities of models without built-in alignment.
- Deployment with TGI: Specifically prepared to avoid common deployment hurdles with Hugging Face's text-generation-inference.
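Once a TGI server is running with this model, it can be queried over TGI's REST API via the `/generate` endpoint. A minimal sketch of building the request body (the localhost URL is an assumption about your deployment; sampling parameters are illustrative):

```python
import json

TGI_URL = "http://localhost:8080/generate"  # assumed local TGI endpoint

def build_generate_request(prompt: str, max_new_tokens: int = 256) -> dict:
    """Build the JSON body for TGI's /generate endpoint."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": 0.7,
            # Stop at the ChatML end-of-turn marker so output ends cleanly.
            "stop": ["<|im_end|>"],
        },
    }

body = build_generate_request(
    "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
)
print(json.dumps(body, indent=2))
# POST this body to TGI_URL, e.g. requests.post(TGI_URL, json=body)
```

The `stop` sequence matters for ChatML models: without it, the model may continue past the assistant turn and begin fabricating further user messages.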