nothingiisreal/MN-12B-Starcannon-v2

Parameters: 12B
Quantization: FP8
Context length: 32,768 tokens
License: apache-2.0

MN-12B-Starcannon-v2 Overview

MN-12B-Starcannon-v2 is a 12-billion-parameter language model published by nothingiisreal, with a 32,768-token context length. It was constructed with the TIES merge method, combining two parent models: nothingiisreal/MN-12B-Celeste-V1.9 and intervitens/mini-magnum-12b-v1.1. The merge configuration assigns each model an equal weight of 0.5, with a separate density parameter applied to each.
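For context, TIES merging works by trimming each model's parameter deltas against a shared base down to the largest-magnitude entries (the density), electing a per-parameter sign by majority vote, and averaging only the deltas that agree with the elected sign. The sketch below illustrates this on a single weight tensor. It is a simplified, hypothetical illustration rather than mergekit's actual implementation; the function name, toy tensors, and density values are placeholders, since the card specifies only the 0.5 weights.

```python
import torch

def ties_merge_tensor(base, finetuned, weights, densities):
    """Illustrative single-tensor TIES merge: trim, elect signs, disjoint-merge."""
    trimmed = []
    for ft, w, d in zip(finetuned, weights, densities):
        delta = ft - base                              # task vector vs. the base model
        k = max(1, int(d * delta.numel()))             # keep the top d-fraction by magnitude
        cutoff = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        delta = torch.where(delta.abs() >= cutoff, delta, torch.zeros_like(delta))
        trimmed.append(w * delta)                      # apply the per-model merge weight
    stacked = torch.stack(trimmed)
    elected = stacked.sum(dim=0).sign()                # elect a sign per parameter
    agree = stacked.sign() == elected                  # drop entries that disagree
    kept = torch.where(agree, stacked, torch.zeros_like(stacked))
    count = agree.sum(dim=0).clamp(min=1)              # disjoint mean over agreeing entries
    return base + kept.sum(dim=0) / count

# Toy demonstration with random 4x4 weight matrices.
torch.manual_seed(0)
base = torch.randn(4, 4)
model_a = base + 0.1 * torch.randn(4, 4)   # stand-in for Celeste-V1.9
model_b = base + 0.1 * torch.randn(4, 4)   # stand-in for mini-magnum-12b-v1.1
merged = ties_merge_tensor(base, [model_a, model_b],
                           weights=[0.5, 0.5],    # equal weights, as on the card
                           densities=[0.5, 0.5])  # densities are placeholder values
print(merged)
```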

Key Characteristics

  • Creative Output: Tuned toward more creative, less restrictive responses than many comparable models, with a distinct output style.
  • Higher Verbosity: Tends to produce longer, more detailed responses, suiting applications that call for rich text generation.
  • Merge-based Architecture: Combines its parent models through a TIES merge, aiming for a blend of their respective strengths.

Use Cases

This model is particularly well suited to imaginative, verbose text generation; a usage sketch follows the list below. Its creative inclination makes it a strong candidate for:

  • Creative writing and storytelling.
  • Role-playing scenarios requiring detailed narrative.
  • Generating expansive and descriptive content where high verbosity is an asset.
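As a quick-start illustration, the snippet below loads the model with Hugging Face transformers and samples a creative-writing completion. It is a minimal sketch: the sampling settings (temperature, max_new_tokens) and the example prompt are illustrative assumptions rather than recommendations from the model card, and the prompt format is whatever chat template ships with the repository's tokenizer.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nothingiisreal/MN-12B-Starcannon-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # spread layers across available devices
)

# Build a single-turn prompt with the tokenizer's bundled chat template.
messages = [
    {"role": "user", "content": "Write the opening scene of a storm-chaser's journal."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings here are illustrative, not values from the model card.
output = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```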