Model Overview
mncai/Mistral-7B-v0.1-platy-1k is a 7-billion-parameter language model developed by Minds And Company, built on the Mistral-7B-v0.1 backbone. It is fine-tuned with the HuggingFace Transformers library and designed for instruction-following tasks.
Key Capabilities
- Instruction Following: Fine-tuned on datasets such as kyujinpy/KOpen-platypus to enhance its ability to understand and execute instructions.
- Llama Prompt Template: Utilizes the Llama Prompt Template for consistent and effective interaction.
Training Details
The model's training incorporates the kyujinpy/KOpen-platypus dataset, which contributes to its specialized instruction-following capabilities. Note that the model's license and usage are bound by the restrictions of the original Llama-2 model, and developers should perform safety testing for their specific applications.
Limitations
As with all large language models, mncai/Mistral-7B-v0.1-platy-1k may produce inaccurate, biased, or objectionable responses. Testing has been conducted primarily in English and may not cover all scenarios. Users are advised to consult the Llama Responsible Use Guide and to conduct thorough safety testing before deployment.