TeeZee/Xwin-LM-70B-V0.1_Jannie

Text generation · Model size: 69B · Quant: FP8 · Context length: 32K · Published: Jan 14, 2024 · License: llama2 · Architecture: Transformer · Open weights

TeeZee/Xwin-LM-70B-V0.1_Jannie is a 69 billion parameter merged language model combining Xwin-LM/Xwin-LM-70B-V0.1 with v2ray/LLaMA-2-Jannie-70B-QLoRA, with a 32K token context length. The merge is designed to generate both SFW and NSFW content, retaining the base model's general qualities while producing more distinctive NSFW output. It suits applications that need flexible content generation across varied contexts.


Model Overview

TeeZee/Xwin-LM-70B-V0.1_Jannie is a 69 billion parameter language model created by merging the Xwin-LM/Xwin-LM-70B-V0.1 base model with the v2ray/LLaMA-2-Jannie-70B-QLoRA fine-tune. This merge aims to combine the strengths of both parent models, resulting in a versatile model with a 32,768 token context length.

Key Capabilities

  • Content Generation Flexibility: The model can produce both Safe-for-Work (SFW) and Not-Safe-for-Work (NSFW) content, switching between the two contexts seamlessly.
  • Retained Qualities: It is reported to retain the positive attributes of its base model while producing NSFW output distinct from standard LimaRP-style generations.
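The QLoRA merge behind this model folds low-rank adapter weights back into the base model's weight matrices. A minimal sketch of that operation, using NumPy and toy shapes as stand-ins for the actual tensor library and 70B-scale layers:

```python
import numpy as np

def merge_lora(W, A, B, alpha, r):
    """Fold a LoRA adapter into a base weight matrix.

    W: base weight, shape (out_dim, in_dim)
    A: LoRA down-projection, shape (r, in_dim)
    B: LoRA up-projection, shape (out_dim, r)
    The merged weight is W + (alpha / r) * B @ A.
    """
    return W + (alpha / r) * (B @ A)

# Toy example with small random matrices (real layers are far larger).
rng = np.random.default_rng(0)
out_dim, in_dim, r, alpha = 8, 8, 2, 16
W = rng.standard_normal((out_dim, in_dim))
A = rng.standard_normal((r, in_dim))
B = rng.standard_normal((out_dim, r))

W_merged = merge_lora(W, A, B, alpha, r)
print(W_merged.shape)  # (8, 8)
```

After merging, the adapter matrices are discarded and the model is served as a single set of dense weights, which is why the published checkpoint has no separate adapter files.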

Performance Benchmarks

Evaluated on the Open LLM Leaderboard, the model demonstrates competitive performance across several metrics:

  • Average Score: 68.26
  • AI2 Reasoning Challenge (25-shot): 71.16
  • HellaSwag (10-shot): 86.86
  • MMLU (5-shot): 69.56
  • TruthfulQA (0-shot): 60.14
  • Winogrande (5-shot): 81.06
  • GSM8k (5-shot): 40.79
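The reported average is the unweighted mean of the six benchmark scores, which can be checked directly:

```python
# Open LLM Leaderboard scores reported for this model.
scores = {
    "ARC (25-shot)": 71.16,
    "HellaSwag (10-shot)": 86.86,
    "MMLU (5-shot)": 69.56,
    "TruthfulQA (0-shot)": 60.14,
    "Winogrande (5-shot)": 81.06,
    "GSM8k (5-shot)": 40.79,
}

average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 68.26
```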

Use Cases

This model is particularly suited to applications that need to generate diverse content, from general-purpose text to NSFW narratives, while adapting flexibly to different contextual demands.
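Xwin-LM models follow the Vicuna-style conversation format, and assuming this merge inherits it from its Xwin-LM parent, a single-turn request could be formatted as below. This is a hedged sketch: the default system prompt shown is the standard Vicuna one and has not been confirmed for this specific merge.

```python
DEFAULT_SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def build_prompt(user_message, system=DEFAULT_SYSTEM):
    """Format a single-turn request in the Vicuna style used by Xwin-LM."""
    return f"{system} USER: {user_message} ASSISTANT:"

prompt = build_prompt("Write a short scene set in a rainy city.")
print(prompt)
```

The generated text is whatever the model produces after the trailing `ASSISTANT:` marker; multi-turn chats append further `USER:`/`ASSISTANT:` pairs to the same string.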