vihangd/smartyplats-7b-v2

Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 8k · Published: Nov 23, 2023 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

SmartyPlats-7b-v2 is an experimental 7 billion parameter language model developed by vihangd, based on the Mistral architecture. The model is fine-tuned with QLoRA on Alpaca-style datasets, making it well suited to instruction-following tasks. It uses an Alpaca-style prompt template, which shapes its behavior for conversational and command-based interactions.


SmartyPlats-7b-v2 Overview

SmartyPlats-7b-v2 is an experimental 7 billion parameter language model developed by vihangd. It is built upon the Mistral 7B architecture and has been fine-tuned using the QLoRA method. This fine-tuning process leverages Alpaca-style datasets, which are known for their instruction-following capabilities.

Key Capabilities

  • Instruction Following: Optimized for understanding and executing commands or instructions provided in natural language.
  • Alpaca Prompt Template: Designed to work seamlessly with Alpaca-style prompt formats, ensuring consistent and effective interaction.
  • Experimental Fine-tune: Represents an ongoing development, offering insights into the performance of Mistral 7B with specific fine-tuning approaches.
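Since the model expects Alpaca-style prompts, a small helper that wraps an instruction in that format can be useful. This is a minimal sketch assuming the standard Alpaca template; the exact template used by this particular fine-tune is not shown on this page.

```python
# Standard Alpaca prompt format (assumed; the exact template for this
# fine-tune is not documented on this page).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a natural-language instruction in the Alpaca template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Summarize the plot of Hamlet in two sentences.")
print(prompt)
```

The model's completion is then everything generated after the `### Response:` marker.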

Good For

  • Instruction-based tasks: Ideal for applications requiring the model to respond to direct instructions or questions.
  • Conversational AI: Suitable for building chatbots or interactive agents that follow a clear dialogue structure.
  • Research and Experimentation: A valuable base for further fine-tuning or exploring the impact of Alpaca-style data on Mistral models.
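For the use cases above, the model is typically called through an OpenAI-compatible completions API. The sketch below only constructs the request payload; the endpoint shape and the parameter values are assumptions for illustration, not details taken from this page.

```python
import json

# Hypothetical payload for an OpenAI-compatible /v1/completions endpoint
# (such as the one Featherless exposes). Endpoint details and the
# temperature/max_tokens values here are illustrative assumptions.
payload = {
    "model": "vihangd/smartyplats-7b-v2",  # model id from this page
    "prompt": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\nList three uses of QLoRA fine-tuning.\n\n"
        "### Response:\n"
    ),
    "max_tokens": 256,       # illustrative value
    "temperature": 0.7,      # illustrative value
}

body = json.dumps(payload)
print(body[:50])
```

Sending the request additionally requires the provider's base URL and an API key, which are omitted here.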

Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model cover the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
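As a reference for the parameter names above, here is what such a sampler configuration looks like when passed to an OpenAI-compatible API. The values are placeholders chosen for illustration; the actual user configurations are not reproduced on this page.

```python
# Illustrative sampler configuration covering the parameters listed above.
# All values are placeholder assumptions, not the top user configs.
sampler_config = {
    "temperature": 0.7,        # randomness of token sampling
    "top_p": 0.9,              # nucleus sampling cutoff
    "top_k": 40,               # restrict sampling to the k most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens by how often they appeared
    "presence_penalty": 0.0,   # penalize tokens that appeared at all
    "repetition_penalty": 1.1, # multiplicative penalty on repeated tokens
    "min_p": 0.05,             # drop tokens below this fraction of the top prob
}

print(sorted(sampler_config))
```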