mncai/Mistral-7B-v0.1-orca_platy-2k
The mncai/Mistral-7B-v0.1-orca_platy-2k model is a fine-tuned variant of the Mistral-7B-v0.1 backbone, developed by Minds And Company. It is fine-tuned on a combination of the kyujinpy/KOpen-platypus and kyujinpy/OpenOrca-KO datasets using the Llama Prompt Template, and is designed for general language understanding and generation tasks, building on the capabilities of its Mistral base.
Model Overview
The mncai/Mistral-7B-v0.1-orca_platy-2k is a language model developed by Minds And Company, built upon the Mistral-7B-v0.1 backbone. It utilizes the HuggingFace Transformers library for its implementation.
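Since the card states the model is implemented with the HuggingFace Transformers library, it can be loaded with the standard causal-LM API. The sketch below is a minimal loading helper; the `device_map` and `torch_dtype` settings are illustrative assumptions, not values specified by this card.

```python
# Minimal sketch of loading this model with HuggingFace Transformers.
# Requires the `transformers` package, a PyTorch backend, and network access;
# device_map/torch_dtype choices are assumptions, adjust for your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mncai/Mistral-7B-v0.1-orca_platy-2k"

def load(model_id: str = MODEL_ID):
    """Return (tokenizer, model) for the given checkpoint."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # spread weights across available devices
        torch_dtype="auto",  # use the dtype stored in the checkpoint
    )
    return tokenizer, model
```

Calling `load()` downloads roughly 14 GB of weights on first use, so a GPU with sufficient memory (or CPU offloading) is advisable for a 7B-parameter model.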
Training Details
This model has been fine-tuned using a combination of two key datasets:
- kyujinpy/KOpen-platypus
- kyujinpy/OpenOrca-KO
The fine-tuning process employs the Llama Prompt Template for instruction following.
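The card names the Llama Prompt Template but does not reproduce it, so the helper below is a sketch of the common Llama-2 `[INST]` convention; the exact system prompt and delimiters used during this model's fine-tuning are assumptions.

```python
# Sketch of a Llama-2-style instruction prompt. The [INST]/<<SYS>> format is
# an assumption based on the common Llama-2 convention; the template actually
# used for this fine-tune may differ.
def build_prompt(instruction: str,
                 system: str = "You are a helpful assistant.") -> str:
    """Wrap an instruction in a Llama-2-style chat prompt."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{instruction} [/INST]"

prompt = build_prompt("Summarize the Mistral-7B architecture in one sentence.")
print(prompt)
```

Feeding prompts in the same format used during fine-tuning generally improves instruction-following quality, so it is worth verifying the template against the training datasets before deployment.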
Limitations and Responsible Use
As a fine-tuned variant of the Mistral-7B-v0.1 base model, it carries the inherent risks of large language models. Its outputs cannot be predicted in all scenarios and may occasionally be inaccurate, biased, or objectionable. Users are advised to perform safety testing and tuning specific to their applications before deployment. The model's license and usage are subject to the terms of the upstream Mistral-7B-v0.1 model, and it is provided without warranty.
Citation
For academic or research purposes, users should cite the original Orca paper and the Mistral 7B publication, as well as the Orca-best dataset where relevant.