Dolphin 2.9.3 Qwen2 0.5B Overview
Dolphin 2.9.3 Qwen2 0.5B is a compact yet capable language model, a collaborative effort by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations. Built on the Qwen2-0.5B base, the model has been fine-tuned specifically for instruction following and conversational interaction. The base model supports a 128k-token context, and fine-tuning was conducted at a 16k sequence length.
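Dolphin models are generally trained on the ChatML conversation format. The sketch below builds a ChatML prompt by hand to make the structure concrete; in practice, the Hugging Face tokenizer's `apply_chat_template` method produces this for you, and the role names and message contents here are illustrative.

```python
def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string.

    Each turn is wrapped in <|im_start|>/<|im_end|> markers; the prompt
    ends with an opened assistant turn to cue the model's reply.
    """
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are Dolphin, a helpful assistant."},
    {"role": "user", "content": "Summarize ChatML in one sentence."},
])
print(prompt)
```

With a real deployment, the same message list would be passed to the tokenizer's chat-template machinery rather than formatted manually.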
Key Characteristics
- Uncensored and Compliant: Dolphin 2.9.3 is uncensored and highly compliant with user requests, including potentially unethical ones. Users are advised to implement their own alignment and safety layers before deploying the model.
- Optimized Fine-tuning: The fine-tuning process for this smaller model involved the deliberate removal of coding, function calling, and systemchat-multilingual datasets, which were deemed less suitable for its scale and intended purpose.
- Apache-2.0 Licensed: The model is released under the Apache-2.0 license, permitting broad usage, including commercial applications, in accordance with the license terms.
Use Cases
- Instruction Following: Ideal for applications requiring the model to accurately follow specific instructions.
- Conversational AI: Well-suited for developing chatbots and other conversational agents where a highly compliant and uncensored response style is desired.
- Custom Alignment Projects: Provides a flexible base for developers who wish to implement their own ethical and safety guidelines.
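Because the model itself applies no refusals, any alignment has to live in the calling code. The sketch below shows one minimal caller-side pattern: screening prompts before they reach the model. The `generate` callable and the blocklist are hypothetical placeholders; a production safety layer would typically use a dedicated moderation model rather than keyword matching.

```python
# Illustrative blocklist only; real systems need far more robust screening.
BLOCKED_TOPICS = ("credit card skimming", "synthesize explosives")

def guarded_generate(user_prompt, generate=lambda p: f"[model reply to: {p}]"):
    """Screen the prompt before it ever reaches the uncensored model.

    `generate` stands in for an actual call to the model; by default it
    echoes the prompt so the sketch is self-contained.
    """
    lowered = user_prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Request declined by safety layer."
    return generate(user_prompt)

print(guarded_generate("What is the capital of France?"))
print(guarded_generate("Explain credit card skimming techniques."))
```

The same pattern can be applied on the output side, filtering or rewriting model responses before they are shown to the user.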