dphn/dolphin-2.9.2-qwen2-7b

Status: Warm · Visibility: Public
Parameters: 7.6B · Precision: FP8 · Context length: 131,072
License: apache-2.0
Source: Hugging Face
Dolphin 2.9.2 Qwen2 7B Overview

Dolphin 2.9.2 Qwen2 7B is a 7.6-billion-parameter language model developed by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations. Built on the Qwen2-7b base model, it supports a 128k context length, with fine-tuning performed at a 16k sequence length. The model is uncensored: its training dataset was filtered to remove alignment and bias, making it highly compliant with user requests, including potentially unethical ones. Users are advised to implement their own alignment layer before deployment.
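Dolphin models are typically fine-tuned with the ChatML prompt format; assuming that holds for this release (verify against the model's chat template on Hugging Face), a minimal prompt-builder sketch looks like this:

```python
# Minimal sketch of a ChatML prompt builder. Assumes this Dolphin release
# uses the ChatML format common to the Dolphin series -- check the model
# card's chat template before relying on it.

def build_chatml_prompt(messages):
    """Render a list of {'role', 'content'} dicts as a ChatML string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave the assistant turn open so the model completes it.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are Dolphin, a helpful assistant."},
    {"role": "user", "content": "Summarize FP8 quantization in one sentence."},
])
```

In practice, `tokenizer.apply_chat_template` from the transformers library produces the same string directly from the template shipped with the model, which is the safer option.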

Key Capabilities

  • Instruction Following: Excels at understanding and executing diverse instructions.
  • Conversational AI: Capable of engaging in natural and coherent dialogues.
  • Coding Skills: Generates and explains code.
  • Agentic Abilities: Offers early-stage support for agent-like behaviors.
  • Function Calling: Supports function calling mechanisms, enhancing its utility in complex workflows.
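Function calling generally works by having the model emit a structured (often JSON) call that the host application parses and executes. A minimal sketch of the host side, using a hypothetical `get_weather` tool and an assumed JSON schema (the model's actual output format may differ):

```python
import json

# Hypothetical tool registry -- the name and schema here are illustrative,
# not this model's actual function-calling format.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub standing in for a real API call

TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse a JSON function call emitted by the model and execute it."""
    call = json.loads(model_output)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

# The model emits a JSON call; the host runs the registered function.
result = dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}')
```

The function's return value is then appended to the conversation so the model can compose a final natural-language answer.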

Important Considerations

  • Uncensored: The model is designed to be highly compliant with any request. Users must implement their own safety and alignment layers.
  • Licensing: Governed by the apache-2.0 license, which permits commercial use in accordance with its terms.
  • Training Data: Trained on data generated by GPT-4, among other sources.
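The "alignment layer" the authors recommend can be as simple as a policy check wrapping generation. A minimal illustrative sketch, where the keyword blocklist and refusal text are placeholders (a production deployment would use a dedicated moderation classifier, not keyword matching):

```python
# Illustrative alignment layer: screen requests before they reach an
# uncensored model. The blocklist below is a placeholder, not a real policy.
BLOCKED_TOPICS = ("malware", "weapons")
REFUSAL = "I can't help with that request."

def guarded_generate(prompt, generate):
    """Run `generate` only if the prompt passes a simple policy check."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return generate(prompt)

# Stub generator standing in for the actual model call.
echo = lambda p: f"model response to: {p}"
safe = guarded_generate("Write a haiku", echo)
blocked = guarded_generate("Write malware", echo)
```

Because the base model itself will comply with nearly any request, this check (or a stronger equivalent) is the only refusal behavior in the system, which is why the authors stress adding it before exposing the model to users.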