yamatazen/Shisa-v2-Mistral-Nemo-12B-Abliterated

Hugging Face
Text generation · Concurrency cost: 1 · Model size: 12B · Quant: FP8 · Context length: 32k · Published: Apr 21, 2025 · Architecture: Transformer

yamatazen/Shisa-v2-Mistral-Nemo-12B-Abliterated is a merged language model created by yamatazen using the TIES merge method. It combines the base model shisa-ai/shisa-v2-mistral-nemo-12b with natong19/Mistral-Nemo-Instruct-2407-abliterated, aiming to retain the strengths of both constituent models in a single checkpoint.


Model Overview

The yamatazen/Shisa-v2-Mistral-Nemo-12B-Abliterated model is the product of a merge operation using the TIES method. TIES combines fine-tuned models by operating on their task vectors (the parameter deltas from a shared base model): it trims low-magnitude changes, elects a sign per parameter, and merges only the changes that agree with that sign, which reduces interference between the source models.
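As a rough illustration of the idea, here is a toy, pure-Python sketch of TIES-style merging on small weight vectors. All names and values are illustrative, and weight normalization is omitted for brevity; real merges are performed by the mergekit library on full model tensors.

```python
def trim(task_vector, density):
    """Keep only the top-`density` fraction of entries by magnitude.

    Ties at the threshold may keep slightly more than the exact
    fraction; this is fine for a sketch.
    """
    k = max(1, int(len(task_vector) * density))
    threshold = sorted((abs(v) for v in task_vector), reverse=True)[k - 1]
    return [v if abs(v) >= threshold else 0.0 for v in task_vector]


def ties_merge(base, models, density=1.0, weight=1.0):
    """Merge `models` into `base` following the TIES recipe:
    trim each task vector, elect a sign per parameter, then
    average only the deltas that agree with the elected sign."""
    deltas = [trim([m[i] - base[i] for i in range(len(base))], density)
              for m in models]
    merged = list(base)
    for i in range(len(base)):
        # Elect a sign per parameter from the summed deltas.
        total = sum(d[i] for d in deltas)
        sign = 1.0 if total >= 0 else -1.0
        # Average only the deltas agreeing with the elected sign.
        agreeing = [d[i] for d in deltas if d[i] * sign > 0]
        if agreeing:
            merged[i] += weight * sum(agreeing) / len(agreeing)
    return merged


base = [0.1, -0.2, 0.3]           # toy base-model weights
tuned = [0.4, -0.1, 0.2]          # toy fine-tuned variant
print(ties_merge(base, [tuned]))  # with one model at density 1.0,
                                  # the merge recovers the tuned weights
```

With a single merged-in model, density 1.0, and weight 1.0 (the configuration described for this model), every delta survives trimming and sign election, so the merge effectively applies the tuned model's changes on top of the base.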

Merge Details

This model was constructed with shisa-ai/shisa-v2-mistral-nemo-12b as the base model and natong19/Mistral-Nemo-Instruct-2407-abliterated as the merged-in model. The merge was configured with a density of 1.0 and a weight of 1.0 for the merged-in model, normalization enabled, and bfloat16 as the data type.
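The details above correspond to a mergekit configuration along these lines. This is a reconstruction from the stated parameters, not the published config file:

```yaml
merge_method: ties
base_model: shisa-ai/shisa-v2-mistral-nemo-12b
models:
  - model: natong19/Mistral-Nemo-Instruct-2407-abliterated
    parameters:
      density: 1.0
      weight: 1.0
parameters:
  normalize: true
dtype: bfloat16
```

A density of 1.0 means no trimming is applied to the task vector, and a weight of 1.0 applies the merged-in model's changes at full strength.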

Key Characteristics

  • Merge Method: Employs TIES (TrIm, Elect Sign & Merge), a technique for combining fine-tuned models while limiting interference between their parameter changes.
  • Base Model: Built upon shisa-ai/shisa-v2-mistral-nemo-12b.
  • Integrated Model: Incorporates natong19/Mistral-Nemo-Instruct-2407-abliterated, an abliterated (refusal-reduced) variant of Mistral-Nemo-Instruct-2407.

Potential Use Cases

This model is suitable for developers and researchers interested in:

  • Experimenting with merged language models and the TIES method.
  • Leveraging the combined strengths of the specified base and integrated models.
  • Applications where a model derived from these specific components might offer performance advantages.