Norquinal/Mistral-7B-claude-chat

  • Task: Text Generation
  • Concurrency Cost: 1
  • Model Size: 7B
  • Quant: FP8
  • Context Length: 8k
  • Published: Sep 27, 2023
  • License: cc-by-nc-4.0
  • Architecture: Transformer

Norquinal/Mistral-7B-claude-chat is a 7 billion parameter Mistral-7B-v0.1 model fine-tuned by Norquinal using QLoRA (4-bit precision). This model is specifically optimized for multi-round chat interactions, having been trained on a dataset derived from Claude conversations. It excels at generating helpful, detailed, and polite responses in a conversational context, following the Vicuna 1.1 prompt format.


Norquinal/Mistral-7B-claude-chat Overview

This model is a 7 billion parameter variant of the Mistral-7B-v0.1 base model, developed by Norquinal. It has been fine-tuned using QLoRA (4-bit precision) on a specialized dataset, claude_multiround_chat_1k, which is a subset of a larger 30,000-sample dataset of Claude conversations.

Key Capabilities

  • Multi-round Chat: Optimized for engaging in extended, multi-turn conversational exchanges.
  • Helpful and Detailed Responses: Designed to provide comprehensive and polite answers to user queries.
  • Vicuna 1.1 Prompt Format: Adheres to the standard Vicuna 1.1 prompt structure for chat interactions.
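The Vicuna 1.1 format mentioned above can be sketched as a small prompt builder. The system line and the `USER:`/`ASSISTANT:` role tags below follow the standard Vicuna 1.1 convention; verify them against the model card before relying on them in production.

```python
# Standard Vicuna 1.1 system preamble (an assumption based on the
# convention, not text quoted from this model card).
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)


def build_prompt(turns):
    """Build a Vicuna 1.1 prompt from (user, assistant) pairs.

    Pass None as the assistant message of the final pair to leave the
    last ASSISTANT slot open for the model to complete.
    """
    parts = [SYSTEM]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        if assistant_msg is None:
            parts.append("ASSISTANT:")
        else:
            # Completed assistant turns are closed with the EOS token.
            parts.append(f"ASSISTANT: {assistant_msg}</s>")
    return " ".join(parts)
```

For example, `build_prompt([("What is QLoRA?", None)])` yields the system preamble followed by `USER: What is QLoRA? ASSISTANT:`, ready for the model to continue.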

Good For

  • Conversational AI Applications: Ideal for chatbots, virtual assistants, and interactive dialogue systems where detailed and polite responses are crucial.
  • Generating Human-like Dialogue: Excels at producing natural-sounding and contextually relevant chat outputs.

Users can integrate this model into platforms such as Text Generation Web UI; the model card also provides instructions for loading it with the transformers library.
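A minimal loading sketch with the transformers library is shown below. The model ID comes from this card, but the 4-bit quantization flags (chosen to mirror the QLoRA fine-tune precision) are an assumption, not the author's published instructions; they require the optional `torch` and `bitsandbytes` packages.

```python
# Model ID as listed on this card.
MODEL_ID = "Norquinal/Mistral-7B-claude-chat"


def load_model(model_id: str = MODEL_ID):
    """Load tokenizer and model in 4-bit NF4-style quantization.

    Imports are deferred so the module can be inspected without the
    heavyweight dependencies installed.
    """
    import torch
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        BitsAndBytesConfig,
    )

    # 4-bit loading mirrors the QLoRA precision the card mentions;
    # compute dtype is an illustrative choice.
    quant_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=quant_config,
        device_map="auto",
    )
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model()
```

Note that the FP8 quantization listed in the metadata refers to how the hosting platform serves the model; local loading can use any precision your hardware supports.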

Popular Sampler Settings

Featherless surfaces the three parameter combinations most often used with this model. Each configuration tunes the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
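These parameters are typically sent as generation settings in an API request or passed to a local inference call. The page does not show the actual top configurations, so the values below are common illustrative defaults, not Featherless users' settings.

```python
# Illustrative sampler settings covering the parameters listed above.
# The values are generic defaults, NOT the site's top configurations.
sampler_settings = {
    "temperature": 0.7,         # randomness of sampling
    "top_p": 0.9,               # nucleus sampling cutoff
    "top_k": 40,                # restrict to the k most likely tokens
    "frequency_penalty": 0.0,   # OpenAI-style per-occurrence penalty
    "presence_penalty": 0.0,    # OpenAI-style one-time penalty
    "repetition_penalty": 1.1,  # multiplicative repeat discouragement
    "min_p": 0.05,              # drop tokens below this relative probability
}
```

Note that `frequency_penalty` and `presence_penalty` are OpenAI-API-style parameters, while `repetition_penalty` and `min_p` are the equivalents used by local backends such as transformers; a given client usually honors one family or the other.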