teleboas/alpaca_mistral-7b-v0.2

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Mar 24, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

teleboas/alpaca_mistral-7b-v0.2 is a 7 billion parameter language model developed by teleboas, fine-tuned from Mistral-7B-v0.2. This model specializes in instruction-following tasks, having been trained on the yahma/alpaca-cleaned dataset. It leverages the Mistral architecture with a 4096-token context length, making it suitable for general-purpose conversational AI and text generation applications.


Model Overview

alpaca_mistral-7b-v0.2 is a 7 billion parameter language model developed by teleboas. It is a fine-tuned version of Mistral-7B-v0.2, specifically optimized for instruction-following capabilities.

Key Characteristics

  • Base Model: Fine-tuned from unsloth/mistral-7b-v0.2-bnb-4bit, which is based on the Mistral-7B-v0.2 architecture.
  • Training Data: Utilizes the yahma/alpaca-cleaned dataset for instruction-tuning, enhancing its ability to follow commands and generate relevant responses.
  • Parameter Count: Features 7 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports a context window of 4096 tokens, allowing it to process moderately long inputs.
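Models fine-tuned on yahma/alpaca-cleaned conventionally expect the standard Alpaca prompt template. A minimal sketch of prompting the model through the Hugging Face transformers API follows; the exact template, repository id, and generation settings are assumptions based on this card, not a recipe documented by the model author:

```python
# Standard Alpaca prompt template; fine-tunes on yahma/alpaca-cleaned
# conventionally expect this format (assumed, not confirmed by the card).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

def generate(instruction: str,
             model_id: str = "teleboas/alpaca_mistral-7b-v0.2",
             max_new_tokens: int = 256) -> str:
    """Download the weights (several GB) and generate one response."""
    # Imported lazily so the prompt helper works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    # Keep prompt plus completion inside the 4096-token context window.
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```

A call such as `generate("Name three uses of a small language model.")` would then return the model's completion as plain text.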

Use Cases

This model is well-suited for applications requiring a capable instruction-following language model, such as:

  • General-purpose chatbots and conversational agents.
  • Text generation based on specific prompts or instructions.
  • Prototyping and development where a smaller, efficient instruction-tuned model is beneficial.

Quantized Versions

A GGUF version of this model is available for optimized local inference: teleboas/alpaca_mistral-7b-v0.2-GGUF.
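For the GGUF build, local inference is commonly done with llama-cpp-python. The sketch below assumes that library's `Llama.from_pretrained` loader; the quantization filename pattern is a hypothetical placeholder, so check the GGUF repository's file list for what is actually published:

```python
# Sketch: local inference on the GGUF build via llama-cpp-python
# (pip install llama-cpp-python).

def run_gguf(instruction: str,
             repo_id: str = "teleboas/alpaca_mistral-7b-v0.2-GGUF",
             filename: str = "*Q4_K_M.gguf") -> str:
    # Imported lazily so this module loads without llama-cpp-python installed.
    from llama_cpp import Llama

    llm = Llama.from_pretrained(
        repo_id=repo_id,
        filename=filename,   # glob pattern for a hypothetical 4-bit quant file
        n_ctx=4096,          # match the model's context window
    )
    prompt = ("Below is an instruction that describes a task. "
              "Write a response that appropriately completes the request.\n\n"
              f"### Instruction:\n{instruction}\n\n### Response:\n")
    out = llm(prompt, max_tokens=128)
    return out["choices"][0]["text"]
```

The first call downloads and caches the matching GGUF file, so subsequent runs load directly from disk.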