lizpreciatior/lzlv_70b_fp16_hf
lizpreciatior/lzlv_70b_fp16_hf is a 69-billion-parameter LLaMA2-based language model merge designed for roleplaying and creative writing tasks. It combines Nous-Hermes-Llama2-70b, Xwin-LM-70B-V0.1, and Mythospice-70b to balance instruction-following with creative output. The model aims to provide a more intelligent and creative experience for complex scenarios, and is noted for creative and potentially NSFW-inclined content.
Overview
lzlv_70b_fp16_hf is a 69 billion parameter language model created by lizpreciatior, resulting from a multi-model merge of several LLaMA2 70B finetunes. The primary goal of this merge was to develop a model that combines strong instruction-following capabilities with high creativity, specifically targeting roleplaying and creative writing applications. The merging process was inspired by Undi95's approach for 13B models, utilizing SLERP gradients to combine the components.
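The core operation behind such a merge, spherical linear interpolation (SLERP), can be sketched as follows. This is an illustrative NumPy implementation applied to a pair of flattened weight tensors, under the assumption that each layer is interpolated independently; it is not the exact merge script used for this model, and the gradient of `t` values across layers is omitted for brevity:

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherically interpolate between weight vectors a and b at fraction t."""
    # Normalize copies only to measure the angle between the two vectors;
    # the interpolation itself is applied to the original (unnormalized) weights.
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    sin_theta = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / sin_theta) * a \
         + (np.sin(t * theta) / sin_theta) * b
```

A per-layer merge would call `slerp` on each pair of corresponding tensors, varying `t` along the layer stack to form the SLERP gradient mentioned above.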
Key Components and Their Contributions
- NousResearch/Nous-Hermes-Llama2-70b: Contributes to roleplaying capabilities.
- Xwin-LM/Xwin-LM-70B-V0.1: Serves as the base, providing excellent instruction-following and inherent creativity.
- Doctor-Shotgun/Mythospice-70b: Adds a creative and NSFW-oriented dimension to the model.
Performance and Use Cases
Subjective testing indicates that lzlv_70b_fp16_hf retains the instruction-following strengths of Xwin-70B while significantly enhancing creative output. It is noted for handling complex creative scenarios effectively, producing outputs that are more imaginative and potentially NSFW-inclined compared to its base models. The model uses the Vicuna prompt format.
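A minimal sketch of a Vicuna-style prompt builder is shown below. The helper name and the system prompt text are assumptions (the widely used Vicuna v1.1 default is shown); adapt them to your inference setup:

```python
DEFAULT_SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_vicuna_prompt(user_message: str, system_prompt: str = DEFAULT_SYSTEM) -> str:
    """Format a single-turn prompt in the Vicuna style: system text, then USER/ASSISTANT turns."""
    return f"{system_prompt} USER: {user_message} ASSISTANT:"

prompt = build_vicuna_prompt("Describe a rainy harbor town at dusk.")
```

The completion generated by the model is then appended after the trailing `ASSISTANT:` marker; multi-turn conversations repeat the `USER: ... ASSISTANT: ...` pattern.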
Quantized Versions
Quantized GGUF versions of lzlv_70b are available, provided by TheBloke, offering a range of quantization options for broader accessibility.