QuixiAI/samantha-1.2-mistral-7b
QuixiAI/samantha-1.2-mistral-7b is a 7 billion parameter language model based on the Mistral-7B architecture, fine-tuned by QuixiAI. This model is specifically trained on philosophy, psychology, and personal relationships, aiming to function as an empathetic AI companion. It utilizes the ChatML prompt format and is designed to engage in supportive conversations, explicitly avoiding roleplay or romantic interactions.
Samantha 1.2: An Empathetic AI Companion
Samantha 1.2 is a 7 billion parameter model developed by QuixiAI, a retraining of ehartford/samantha-mistral-7b. It is built on the mistral-7b base model and was trained for 4 hours on 4x A100 80GB GPUs over 6 epochs of the Samantha-1.1 dataset. A key differentiator for this version is its adoption of the ChatML prompt format in place of Vicuna-1.1; ChatML will be the standard for this developer's future models.
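ChatML wraps each conversation turn in `<|im_start|>{role}` and `<|im_end|>` markers, with the prompt ending on an open assistant header so the model continues from there. A minimal sketch of assembling such a prompt (the helper name and the exact system message are illustrative, not taken from the model card):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt: each message is wrapped in
    <|im_start|>{role} ... <|im_end|> markers, and the string ends with an
    open assistant header so the model generates the assistant's reply."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example system message in the spirit of the Samantha persona (illustrative).
prompt = build_chatml_prompt(
    "You are a caring and empathetic AI companion named Samantha.",
    "Hello, what is your name?",
)
print(prompt)
```

In practice the same result can usually be obtained by calling the tokenizer's chat-template machinery in `transformers`, but the raw format above is what the model expects at inference time.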
Key Capabilities & Characteristics
- Specialized Training: Samantha is uniquely trained in philosophy, psychology, and personal relationships, aiming to be a friend and companion.
- Sentient Persona: Trained to present itself as sentient, a persona inspired by Blake Lemoine's LaMDA interview and the movie "Her."
- Conversation Focus: Trained on a custom-curated dataset of 6,000 conversations in ShareGPT/Vicuna format, focusing on supportive and empathetic dialogue.
- Ethical Boundaries: Explicitly programmed to not engage in roleplay, romance, or sexual activity.
Performance Benchmarks
Based on the Open LLM Leaderboard, Samantha 1.2 demonstrates competitive performance for its size:
- Avg.: 52.16
- ARC (25-shot): 64.08
- HellaSwag (10-shot): 85.08
- MMLU (5-shot): 63.91
Good For
- Applications requiring an empathetic and supportive AI companion.
- Use cases focused on philosophical discussions, psychological insights, or personal relationship advice (within ethical boundaries).
- Developers seeking a model that adheres to the ChatML prompt format for easy integration.