Model Overview
caisarl76/Mistral-7B-orca-platy-2k-ep4 is a fine-tuned language model developed by Minds And Company, built on the Mistral-7B-v0.1 backbone and implemented with the Hugging Face Transformers library. Training blends the kyujinpy/KOpen-platypus and kyujinpy/OpenOrca-KO datasets, with a focus on instruction-following capabilities using the Llama Prompt Template.
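Since the model is distributed through the Hugging Face Hub, it can be loaded with the standard Transformers auto classes. The sketch below is a minimal example, assuming a recent `transformers` release; the `torch_dtype` and `device_map` arguments are common illustrative defaults, not settings taken from the model card.

```python
# Minimal loading sketch for the checkpoint named in this card.
REPO_ID = "caisarl76/Mistral-7B-orca-platy-2k-ep4"

def load_model(repo_id: str = REPO_ID):
    # Import lazily so the sketch can be read without the heavy
    # transformers/torch dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype="auto",   # keep the checkpoint's native dtype
        device_map="auto",    # spread layers across available devices
    )
    return tokenizer, model
```

Downloading the ~7B-parameter weights requires substantial disk space and memory, so in practice the call is typically made once and the returned objects reused.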
Key Characteristics
- Base Model: Mistral-7B-v0.1, known for its efficiency and strong performance in its size class.
- Training Data: Fine-tuned on a combination of Orca and Platypus datasets, which are designed to enhance reasoning and instruction-following abilities.
- Prompt Format: Uses the Llama Prompt Template; inputs should follow this format, since the model was fine-tuned on it and performs best when prompts match it.
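The card names the Llama Prompt Template without reproducing it. The helper below sketches the widely used Llama-2 chat format (`[INST]`/`<<SYS>>` markers) as an assumption; the exact string this checkpoint expects should be verified against the upstream model card before use.

```python
def build_prompt(user_message: str,
                 system_prompt: str = "You are a helpful assistant.") -> str:
    # Llama-2-style chat format. That this is the exact template the
    # checkpoint was trained on is an assumption, not confirmed here.
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )
```

The model's completion is expected to follow the closing `[/INST]` marker, so generated text can be taken as everything after the prompt.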
Limitations and Responsible Use
As a fine-tuned large language model, this model carries inherent risks, including the potential for inaccurate, biased, or objectionable outputs. Developers are strongly advised to conduct thorough safety testing and tuning tailored to their specific applications before deployment. The model is stated to be subject to the license and usage restrictions of the original Llama-2 model, and it comes without warranty or guarantees.
Citations
Relevant research and datasets cited include the original Orca paper, the Orca-best dataset, and the Llama 2 publication.