redrix/GodSlayer-12B-ABYSS

Hugging Face

Text generation · Concurrency cost: 1 · Model size: 12B · Quant: FP8 · Context length: 32k · Published: Jan 22, 2025 · Architecture: Transformer

redrix/GodSlayer-12B-ABYSS is a 12-billion-parameter language model created by redrix through a NuSLERP merge of multiple pre-trained models, using IntervitensInc/Mistral-Nemo-Base-2407-chatml as its base. It is designed to produce stable, coherent responses while counteracting positivity bias, aiming for improved realism and more diverse outputs. Compared with many other LLMs, it generates more nuanced and less overtly positive content, making it suitable for applications that need a broader emotional and thematic range.


GodSlayer-12B-ABYSS: A Nuanced Language Model

The model was assembled with the NuSLERP merge method, using IntervitensInc/Mistral-Nemo-Base-2407-chatml as its foundation and blending in several specialized models to achieve its distinctive characteristics.
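NuSLERP is mergekit's reimplementation of spherical linear interpolation (SLERP) for model weights. The sketch below illustrates only the underlying SLERP idea on plain NumPy arrays; it is not mergekit's actual NuSLERP code, which adds options such as row-wise interpolation and task-vector handling.

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    omega = np.arccos(dot)            # angle between the two weight directions
    if omega < eps:                   # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

# Interpolating halfway between two toy "weight" vectors:
w = slerp(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.5)
```

Unlike a plain weighted average, SLERP follows the arc between the two weight directions, which tends to preserve the magnitude structure of the merged tensors.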

Key Capabilities

  • Bias Counteraction: Engineered to counter the positivity bias common in many LLMs, producing more balanced and realistic outputs.
  • Enhanced Realism: Aims for more realistic generated text, avoiding overly optimistic or saccharine responses.
  • Diverse Responses: Produces a wider array of responses, offering greater thematic and emotional range.
  • Stable and Coherent: Despite this specialized focus, the model remains stable and coherent in its generations.

Merge Details

This model is the result of a multi-stage merge incorporating LatitudeGames/Wayfarer-12B, ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2, PocketDoc/Dans-PersonalityEngine-V1.1.0-12b, HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407, romaingrx/red-teamer-mistral-nemo, DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS, rAIfle/Questionable-MN-bf16, and allura-org/MN-12b-RP-Ink. The merge was performed with mergekit using the della_linear and nuslerp methods, in bfloat16, with the ChatML chat template.
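A mergekit recipe for a nuslerp stage might look like the following. This is an illustrative sketch only: the model pair, weights, and base shown here are assumptions for demonstration, not the actual multi-stage recipe used to build GodSlayer-12B-ABYSS.

```yaml
# Illustrative mergekit config for one nuslerp stage (not the actual recipe).
# nuslerp interpolates between two models; base_model is optional.
models:
  - model: LatitudeGames/Wayfarer-12B
    parameters:
      weight: 0.5
  - model: allura-org/MN-12b-RP-Ink
    parameters:
      weight: 0.5
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
merge_method: nuslerp
dtype: bfloat16
```

A multi-stage merge chains such configs, feeding the output of one stage (e.g. a della_linear merge) in as an input model of the next.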

Good For

  • Applications that call for text generation with less positivity bias.
  • Scenarios where diverse and realistic emotional or thematic range is crucial.
  • Creative writing or role-playing that benefits from nuanced character interactions and less predictable outcomes.
  • Use cases needing a 12B parameter model with a 32768 token context length that prioritizes realism over inherent positivity.
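Since the model was merged with a ChatML chat template, prompts for the use cases above follow the ChatML turn format. The formatter below is a hand-rolled sketch for illustration; in practice the tokenizer's `apply_chat_template` method handles this rendering.

```python
def build_chatml_prompt(messages: list[dict[str, str]]) -> str:
    """Render {role, content} messages in ChatML format, ending with an
    open assistant turn for the model to complete."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")  # the model continues from here
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a candid, unsentimental narrator."},
    {"role": "user", "content": "Describe an abandoned lighthouse."},
])
```

Passing the rendered prompt to any inference backend that serves the model (within its 32768-token context) yields a completion for the open assistant turn.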