Local-Novel-LLM-project/Ninja-V2-7B
Ninja-V2-7B Overview
Ninja-V2-7B is a 7 billion parameter base model developed by Local-Novel-LLM-project, created with support from the LocalAI Hackathon's high-performance GPU servers. This model leverages advanced vector merging techniques to combine various foundational models, enhancing its capabilities across diverse applications.
Key Capabilities
- Versatile Task Handling: Designed to excel in a broad spectrum of tasks, including general chat and creative writing.
- Advanced Merging: Built using a multi-stage merging process, incorporating models such as MTSAIR/multi_verse_model, HuggingFaceH4/zephyr-7b-beta, Elizezen/Phos-7B, Elizezen/Antler-7B, NTQAI/chatntq-ja-7b-v1.0, and Elizezen/Berghof-NSFW-7B.
- Context Length: Supports an 8192-token context window, allowing it to process longer inputs and generate more coherent outputs.
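The card does not publish the exact merge recipe, but multi-stage model merges of this kind are commonly expressed as mergekit configs. The fragment below is a purely hypothetical sketch of what one stage combining two of the listed components might look like; the method choice (task_arithmetic), base model, and weight are assumptions, not the project's actual configuration.

```yaml
# Hypothetical mergekit config sketch -- NOT the published Ninja-V2-7B recipe.
# merge_method, base_model choice, and weights are illustrative assumptions.
merge_method: task_arithmetic
base_model: MTSAIR/multi_verse_model
models:
  - model: HuggingFaceH4/zephyr-7b-beta
    parameters:
      weight: 0.5  # assumed blend weight for this stage
dtype: bfloat16
```

A multi-stage merge would chain several such configs, feeding each stage's output model in as an input to the next.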
Usage Guidelines
- Prompt Template: The Vicuna-1.1 template can be used but is not strictly required; for simple text generation, no template is needed.
- System Prompt Best Practices: When defining system prompts, use declarative statements (e.g., "あなたは○○です" - "You are ○○") rather than behavioral instructions (e.g., "あなたは○○として振る舞います" - "You act as ○○"). Similarly, use active voice for capabilities (e.g., "あなたは○○をします" - "You do ○○") instead of passive or potential statements.
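The guidelines above can be sketched as a small helper that assembles a Vicuna-1.1-style prompt with a declarative system prompt. The helper name and the exact spacing of the template are illustrative assumptions; adapt them to your inference stack.

```python
def build_vicuna_prompt(system: str, user: str) -> str:
    """Assemble a Vicuna-1.1-style prompt string (illustrative sketch).

    Per the card's guidance, the system prompt should be declarative
    ("あなたは○○です" / "You are ○○"), not a behavioral instruction.
    """
    return f"{system}\n\nUSER: {user}\nASSISTANT: "


prompt = build_vicuna_prompt(
    "あなたは優秀な小説家です。",   # declarative: "You are an excellent novelist."
    "短い物語を書いてください。",   # "Please write a short story."
)
print(prompt)
```

The resulting string ends with `ASSISTANT: `, leaving the model to continue from there; pass it as the raw input to your text-generation call.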
Licensing
- The model is distributed under the Apache-2.0 license.