PsyOrca2-13b-DARE: A Merged Llama 2 Model for Roleplaying
The royallab/PsyOrca2-13b-DARE is a 13 billion parameter language model built upon the Llama 2 architecture. It represents an experimental merge of two distinct models:
- KoboldAI/PsyFighter-2-13b: A model known for its roleplaying capabilities.
- microsoft/Orca-2-13b: An instruction-tuned model.
The merge was performed with the DARE algorithm (specifically the dare_ties method) to explore how effectively it combines these two models. The base model for the merge was meta-llama/Llama-2-13b-hf.
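Merges of this kind are typically defined declaratively for the mergekit tool. The sketch below shows what a dare_ties configuration for this pairing could look like; the actual weights and densities used by royallab are not stated in this card, so the values here are illustrative placeholders, not the published recipe.

```yaml
# Hypothetical mergekit config for a dare_ties merge of the two parents.
# weight/density values are illustrative, not the released settings.
merge_method: dare_ties
base_model: meta-llama/Llama-2-13b-hf
models:
  - model: KoboldAI/PsyFighter-2-13b
    parameters:
      weight: 0.5    # contribution of the roleplay-focused parent
      density: 0.5   # fraction of delta parameters retained before rescaling
  - model: microsoft/Orca-2-13b
    parameters:
      weight: 0.5    # contribution of the instruction-tuned parent
      density: 0.5
dtype: float16
```

With dare_ties, each parent's delta from the base model is randomly sparsified to the given density, rescaled to compensate, and then combined with TIES-style sign resolution.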
Key Characteristics & Usage
- Architecture: Llama 2-based, 13 billion parameters.
- Merge Method: Utilizes the DARE (Drop And REscale) algorithm, applied here via the dare_ties method.
- Context Length: Supports a 4096-token context.
- Instruction Formats: Compatible with both Alpaca instruct format and Orca ChatML format.
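Since the model accepts two prompt formats, it helps to see both templates side by side. The helpers below are a minimal sketch of the standard Alpaca instruct and ChatML layouts; the function names are my own, and the exact system-prompt conventions used during the parents' training are an assumption.

```python
# Hypothetical helpers illustrating the two prompt formats the card lists.

def alpaca_prompt(instruction: str) -> str:
    """Wrap an instruction in the Alpaca instruct template."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

def chatml_prompt(system: str, user: str) -> str:
    """Wrap one conversation turn in the ChatML template used by Orca 2."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(alpaca_prompt("Stay in character as a weary innkeeper."))
print(chatml_prompt("You are a roleplay assistant.", "Describe the tavern."))
```

Either string can be passed as the prompt to a standard Llama 2 text-generation pipeline; pick one format per conversation and keep it consistent.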
Intended Use and Limitations
This model is primarily geared towards roleplaying applications. Users should be aware of its inherent biases, which are similar to those found in niche online roleplaying communities, in addition to biases from its base models. It is explicitly not intended for generating factual information or providing advice of any kind. For detailed training information, users are directed to the repositories of the merged constituent models.