Undi95/Emerhyst-13B
Undi95/Emerhyst-13B is a 13-billion-parameter language model by Undi95, created by merging Amethyst 13B and Emerald 13B and incorporating LimaRP v3. Designed as a lighter alternative to Emerhyst-20B, it suits systems with lower specifications. The model uses the Alpaca prompt template and targets conversational and role-playing applications, building on components such as PygmalionAI/pygmalion-2-13b and LimaRP-Llama2-13B-v3-EXPERIMENT.
Emerhyst-13B Overview
Undi95/Emerhyst-13B is a 13-billion-parameter language model developed by Undi95, serving as a more accessible version of the larger Emerhyst-20B. It is a merge of two distinct 13B models, Amethyst 13B and Emerald 13B, and further integrates LimaRP v3, a component known for enhancing role-playing capabilities. Users are encouraged to consult the LimaRP documentation for recommended settings.
Key Capabilities & Features
- Optimized for Lower Specifications: Designed to be usable on systems with less computational power, offering a viable alternative to larger models.
- Merged Architecture: Combines the strengths of Amethyst 13B and Emerald 13B, aiming for a balanced performance profile.
- Role-Playing Enhancement: Incorporates LimaRP v3, suggesting a focus on interactive and character-driven conversational tasks.
- Alpaca Prompt Template: Utilizes the widely recognized Alpaca instruction format for consistent input and response generation.
- Component Integration: Built upon a foundation including models like PygmalionAI/pygmalion-2-13b and Xwin-LM/Xwin-LM-13B-V0.1, indicating a blend of diverse training data and architectures.
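Since the model expects the Alpaca instruction format, prompts can be assembled programmatically. The sketch below builds the standard Alpaca template; the helper name and wording of the preamble follow the widely used Alpaca convention, not anything specific to this model card.

```python
def build_alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Assemble a prompt in the standard Alpaca instruction format.

    If `user_input` is provided, the three-section variant
    (Instruction / Input / Response) is used; otherwise the
    two-section variant (Instruction / Response).
    """
    if user_input:
        header = (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
        )
        return (
            f"{header}### Instruction:\n{instruction}\n\n"
            f"### Input:\n{user_input}\n\n### Response:\n"
        )
    header = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
    )
    return f"{header}### Instruction:\n{instruction}\n\n### Response:\n"


prompt = build_alpaca_prompt("Continue the scene as the tavern keeper.")
```

The resulting string can be passed to any inference backend (transformers, llama.cpp, text-generation-webui) as the raw prompt; the model generates text after the `### Response:` marker.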
When to Use Emerhyst-13B
- Resource-Constrained Environments: Ideal for users who need a capable 13B model but cannot run larger alternatives like Emerhyst-20B.
- Conversational AI & Role-Playing: Particularly well-suited for applications requiring engaging dialogue and character interaction, given its integration of LimaRP v3.
- Experimentation with Merged Models: Offers a practical example of how merging different models can create new capabilities or optimize for specific hardware.
Users are advised to follow the recommended settings for LimaRP v3, especially when configuring instruction formats in interfaces like SillyTavern, to achieve desired response lengths and conversational styles.
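As a rough illustration of how such a length control could be wired into a prompt, the hypothetical helper below appends a length modifier to the Alpaca response header. The modifier syntax and the accepted values shown here are assumptions for illustration; the exact format should be verified against the LimaRP v3 documentation before use.

```python
# Illustrative length hints; the actual set accepted by LimaRP v3
# should be confirmed in its documentation.
LENGTH_HINTS = ("tiny", "short", "medium", "long", "huge")


def response_header(length: str = "medium") -> str:
    """Return an Alpaca-style response header with a length modifier.

    Hypothetical sketch: the '(length = ...)' syntax is an assumption
    based on LimaRP-style prompting, not a documented guarantee.
    """
    if length not in LENGTH_HINTS:
        raise ValueError(f"unknown length hint: {length!r}")
    return f"### Response: (length = {length})\n"
```

In SillyTavern this would correspond to editing the instruct-mode "Last Output Sequence" field rather than calling a function, but the string being produced is the same idea.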