mncai/Mistral-7B-v0.1-alpaca-2k

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · License: MIT · Architecture: Transformer · Open Weights · Cold

mncai/Mistral-7B-v0.1-alpaca-2k is a 7 billion parameter language model developed by Minds And Company, fine-tuned from Mistral-7B-v0.1. The model is instruction-tuned on the KoAlpaca-v1.1a dataset and uses the Llama prompt template. It is designed for general language generation tasks, leveraging its Mistral backbone for efficient performance.


Model Overview

mncai/Mistral-7B-v0.1-alpaca-2k is an instruction-tuned language model developed by Minds And Company, built upon the Mistral-7B-v0.1 backbone: a 7 billion parameter model known for its efficiency and strong performance in its size class.

Key Capabilities & Training

  • Backbone Model: Utilizes the mistralai/Mistral-7B-v0.1 as its foundational architecture.
  • Instruction Tuning: Fine-tuned on the beomi/KoAlpaca-v1.1a dataset, enhancing its ability to follow instructions and generate coherent responses.
  • Prompt Template: Employs the Llama prompt template for consistent input formatting (see the sketch after this list).
  • Library: Developed and integrated using the HuggingFace Transformers library.
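
As a concrete illustration of the last two bullets, the sketch below loads the tokenizer with the HuggingFace Transformers library and wraps a user instruction in Llama-style `[INST] ... [/INST]` markers. This is a minimal sketch under the assumption that the standard Llama chat markers match the template used during fine-tuning; the model card does not spell out the exact template string.

```python
# Minimal sketch: load the tokenizer and build a Llama-style prompt.
# The [INST] ... [/INST] markers are an assumption based on the
# standard Llama chat format referenced by the model card.
from transformers import AutoTokenizer

MODEL_ID = "mncai/Mistral-7B-v0.1-alpaca-2k"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

def build_prompt(instruction: str) -> str:
    # The user instruction goes between the markers; the model's
    # reply is generated after the closing [/INST] tag.
    return f"[INST] {instruction} [/INST]"

prompt = build_prompt("Summarize the benefits of instruction tuning.")
inputs = tokenizer(prompt, return_tensors="pt")
print(inputs.input_ids.shape)
```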

Limitations and Responsible Use

Although this model is built on Mistral-7B-v0.1, its prompt template and license disclaimer reference Llama 2, and it carries the inherent risks associated with large language models, including the potential for inaccurate, biased, or objectionable outputs. Users are advised to perform thorough safety testing and tuning for their specific applications. Per its disclaimer, the model is bound by the license and usage restrictions of the original Llama 2 model and comes without warranty.

Good For

  • General instruction-following tasks (see the generation example below).
  • Applications requiring a 7 billion parameter model with a Mistral backbone.
  • Experimentation with models fine-tuned on Alpaca-style datasets.
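
For quick experimentation, an end-to-end generation call might look like the sketch below. It uses the Transformers `text-generation` pipeline; the sampling parameters and the `[INST]` prompt format are illustrative assumptions, not settings published by the model authors.

```python
# Minimal end-to-end generation sketch, assuming the Llama-style
# [INST] template from above; sampling parameters are illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mncai/Mistral-7B-v0.1-alpaca-2k",
    device_map="auto",       # place layers on available GPUs/CPU
    torch_dtype="auto",      # use the checkpoint's native precision
)

prompt = "[INST] Explain what an Alpaca-style dataset is. [/INST]"
output = generator(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    return_full_text=False,  # return only the newly generated reply
)
print(output[0]["generated_text"])
```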