mncai/Mistral-7B-v0.1-platy-1k

Text Generation · Model size: 8B · Quantization: FP8 · Context length: 8k · License: MIT · Architecture: Transformer · Open weights

mncai/Mistral-7B-v0.1-platy-1k is a 7 billion parameter language model developed by Minds And Company, fine-tuned from Mistral-7B-v0.1. This model is specialized for instruction following, leveraging datasets like kyujinpy/KOpen-platypus. It utilizes the Llama Prompt Template, making it suitable for applications requiring precise conversational responses and adherence to specific instructions.


Model Overview

mncai/Mistral-7B-v0.1-platy-1k is a 7 billion parameter language model developed by Minds And Company, built upon the Mistral-7B-v0.1 backbone. This model is fine-tuned using the HuggingFace Transformers library and is designed for instruction-following tasks.

Key Capabilities

  • Instruction Following: Fine-tuned on datasets such as kyujinpy/KOpen-platypus to enhance its ability to understand and execute instructions.
  • Llama Prompt Template: Utilizes the Llama Prompt Template for consistent and effective interaction.

Training Details

The model's training incorporates the kyujinpy/KOpen-platypus dataset, which contributes to its specialized instruction-following capabilities. Note that the model's license and usage are bound by the restrictions of the original Llama-2 model, so developers should perform safety testing for their specific applications before deployment.

Limitations

As with all large language models, mncai/Mistral-7B-v0.1-platy-1k may produce inaccurate, biased, or objectionable responses. Testing has primarily been in English and may not cover all scenarios. Users are advised to consult the Llama Responsible Use Guide and conduct thorough safety testing before deployment.