NeuralNovel/Tanuki-7B-v0.1 Overview
NeuralNovel/Tanuki-7B-v0.1 is a 7-billion-parameter language model fine-tuned from Mistral-7B-Instruct-v0.2. It is designed for generating instructive and narrative text, with a strong focus on roleplay and short storytelling, and aims to provide detailed, creative responses within complex narrative contexts.
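For reference, here is a minimal loading-and-generation sketch using the Hugging Face transformers library. The repo id matches the model name above; the dtype, device placement, and sampling settings are illustrative assumptions, not settings recommended by the model authors.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NeuralNovel/Tanuki-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision fits a 7B model on a single ~16 GB GPU
    device_map="auto",          # requires the accelerate package
)

# Illustrative narrative prompt; sampling parameters are assumptions, not tuned values.
prompt = "Write the opening paragraph of a short mystery story set in a coastal village."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```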
Key Capabilities & Features
- Narrative Generation: Excels at producing creative and detailed narrative content.
- Roleplay Optimization: Tailored for engaging and complex roleplay scenarios.
- Base Model: A full-parameter fine-tune (FFT) of Mistral-7B-Instruct-v0.2, whose instruction format the model presumably inherits (see the prompt-format sketch after this list).
- Licensing: Released under the Apache-2.0 license, permitting both commercial and non-commercial applications.
- Training Data: Fine-tuned on the Neural-Story-v1 and Creative-Logic-v1 datasets.
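Because the model is a fine-tune of Mistral-7B-Instruct-v0.2, it presumably uses Mistral's `[INST] ... [/INST]` instruction format; this is an assumption based on the base model, not something stated here. A sketch of formatting a roleplay prompt through the tokenizer's chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NeuralNovel/Tanuki-7B-v0.1")

# Assumed roleplay-style message; the template itself is read from the tokenizer config.
messages = [
    {
        "role": "user",
        "content": "You are a grizzled ship captain. Stay in character and "
                   "describe the storm approaching the harbor.",
    },
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # expected to resemble "<s>[INST] You are a grizzled ... [/INST]"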
Performance & Limitations
Evaluations on the Open LLM Leaderboard show an average score of 64.74 across the following six benchmarks:
- AI2 Reasoning Challenge (25-shot): 62.80
- HellaSwag (10-shot): 83.14
- MMLU (5-shot): 60.54
- TruthfulQA (0-shot): 66.33
- Winogrande (5-shot): 75.85
- GSM8k (5-shot): 39.80
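As a quick arithmetic check, the reported 64.74 average is simply the unweighted mean of the six benchmark scores listed above:

```python
# Sanity check: the leaderboard average is the unweighted mean of the six scores.
scores = {
    "ARC (25-shot)": 62.80,
    "HellaSwag (10-shot)": 83.14,
    "MMLU (5-shot)": 60.54,
    "TruthfulQA (0-shot)": 66.33,
    "Winogrande (5-shot)": 75.85,
    "GSM8k (5-shot)": 39.80,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # prints 64.74
```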
This model is not optimized for tasks outside instructive and narrative text generation, so performance in unrelated scenarios may be suboptimal. Users should also be aware of biases or limitations inherited from the training data, including possible genre and writing-style biases.