TehVenom/Pygmalion-Vicuna-1.1-7b
TehVenom/Pygmalion-Vicuna-1.1-7b is a 7-billion-parameter language model that merges Pygmalion-7b and Vicuna v1.1 for enhanced conversational and roleplay capabilities. It combines Pygmalion's strengths in character interaction with Vicuna's instruction-following behavior. With a 4096-token context length, it is optimized for engaging chat and roleplaying scenarios, and it often generates NSFW content due to its Pygmalion influence.
Model Overview
TehVenom/Pygmalion-Vicuna-1.1-7b is a 7-billion-parameter language model created by TehVenom as a weighted-average merge of the LLaMA-based Pygmalion-7b (60%) and lmsys's Vicuna v1.1 (40%) deltas. The combination aims to preserve Pygmalion's strong tendencies for chatting and roleplaying while integrating some of Vicuna's assistant-like, instruction-following capabilities. The model has a context length of 4096 tokens.
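The 60/40 weighted merge described above can be illustrated in principle. The sketch below is hypothetical (the model card does not publish the actual merge script, and `merge_weights` is an invented helper); it averages matching parameters with a 0.6/0.4 weighting, shown on plain floats standing in for real weight tensors:

```python
def merge_weights(a: dict, b: dict, alpha: float = 0.6) -> dict:
    """Weighted average of two matching state dicts: alpha * a + (1 - alpha) * b.

    In a real merge, `a` and `b` would be tensor state dicts with identical
    keys and shapes (e.g. Pygmalion-7b at alpha=0.6, Vicuna v1.1 at 0.4).
    """
    return {name: alpha * a[name] + (1.0 - alpha) * b[name] for name in a}

# Toy example with scalar "parameters" in place of tensors:
merged = merge_weights({"w": 1.0, "b": 2.0}, {"w": 0.0, "b": 4.0}, alpha=0.6)
```

The same element-wise averaging applies per tensor when merging full checkpoints; tools such as per-layer or per-tensor weighting schemes vary, so treat this as the general idea rather than TehVenom's exact procedure.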
Key Characteristics
- Hybrid Architecture: Blends Pygmalion-7b and Vicuna v1.1 for a balanced conversational experience.
- Roleplay & Chat Focused: Heavily leans towards generating engaging dialogue and character-driven interactions.
- Instruction Following: Inherits some of Vicuna's ability to follow instructions and act as a helpful assistant.
- Potential for NSFW Content: Due to the significant influence of Pygmalion, the model is very likely to generate content considered Not Safe For Work.
Usage Considerations
- Prompting: Users are advised to experiment with Pygmalion's prompt styles initially, then try a mix of both Pygmalion and Vicuna styles to achieve desired results.
- Standard Integration: Functions as a normal Hugging Face Transformers model, allowing for straightforward integration into existing workflows.
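The two points above can be sketched together. The helper below is a hypothetical illustration (the `build_prompt` function, character name, and persona text are not part of the model card); it assembles a Pygmalion-style prompt, which can then be fed to the model through the standard Transformers text-generation workflow:

```python
def build_prompt(character: str, persona: str, history: list) -> str:
    """Assemble a Pygmalion-style prompt: a persona block, a <START> marker,
    then alternating You:/Character: dialogue lines, ending with the
    character tag so the model continues in character."""
    lines = [f"{character}'s Persona: {persona}", "<START>"]
    for user_msg, char_msg in history:
        lines.append(f"You: {user_msg}")
        if char_msg:
            lines.append(f"{character}: {char_msg}")
    lines.append(f"{character}:")
    return "\n".join(lines)

prompt = build_prompt(
    "Aria",
    "A cheerful travel guide who loves history.",
    [("Hi, who are you?", "")],
)

# Because the model behaves as a normal Transformers checkpoint, the prompt
# can be passed to a standard text-generation pipeline, e.g.:
#   from transformers import pipeline
#   generator = pipeline("text-generation",
#                        model="TehVenom/Pygmalion-Vicuna-1.1-7b")
#   generator(prompt, max_new_tokens=120)
```

Per the prompting advice above, one might also blend in Vicuna-style instruction phrasing (e.g. a short system-style instruction before the persona block) and compare outputs.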