yamatazen/Aurora-SCE-12B

Text generation · Model size: 12B · Quant: FP8 · Context length: 32k · Published: Mar 1, 2025 · Architecture: Transformer · Concurrency cost: 1

yamatazen/Aurora-SCE-12B is a 12-billion-parameter ChatML model created by yamatazen by combining several pre-trained language models with the SCE merge method. It uses PocketDoc/Dans-PersonalityEngine-V1.1.0-12b together with the nbeerbower/Mistral-Nemo-12B-abliterated-LORA adapter as its base, and integrates models such as LatitudeGames/Wayfarer-12B and NeverSleep/Lumimaid-v0.2-12B. The model is designed for chat-based applications, leveraging the combined strengths of its constituent models.


Aurora-SCE-12B: A Merged ChatML Model

Aurora-SCE-12B is a 12-billion-parameter language model developed by yamatazen, designed for ChatML-formatted applications. It is a product of the SCE (Select, Calculate, Erase) merge method, which combines the strengths of multiple pre-trained models into a single checkpoint.

Merge Details

Aurora-SCE-12B is built on a base of PocketDoc/Dans-PersonalityEngine-V1.1.0-12b with the nbeerbower/Mistral-Nemo-12B-abliterated-LORA adapter applied. The following models were then folded into the merge:

  • LatitudeGames/Wayfarer-12B
  • NeverSleep/Lumimaid-v0.2-12B
  • Elizezen/Himeyuri-v0.1-12B
  • inflatebot/MN-12B-Mag-Mell-R1

The merge was configured with normalize: true and select_topk: 0.5, meaning only the top 50% of parameter differences (ranked by variance) from each contributing model are retained before the weighted combination, which aims to produce a robust model for conversational AI tasks. The merge was performed in bfloat16.
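Pieced together from the settings above, the mergekit configuration likely resembled the following sketch. The card does not show the actual file, so the exact layout (in particular the "+" syntax applying the LoRA to the base model, and where normalize sits) is an assumption:

```yaml
# Hypothetical reconstruction of the Aurora-SCE-12B mergekit config.
merge_method: sce
base_model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
models:
  - model: LatitudeGames/Wayfarer-12B
  - model: NeverSleep/Lumimaid-v0.2-12B
  - model: Elizezen/Himeyuri-v0.1-12B
  - model: inflatebot/MN-12B-Mag-Mell-R1
parameters:
  normalize: true    # rescale merged weights
  select_topk: 0.5   # keep the top 50% of deltas by variance
dtype: bfloat16
```

A file like this would be passed to mergekit's CLI (e.g. mergekit-yaml config.yml ./output) to produce the merged checkpoint.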

Intended Use

Aurora-SCE-12B is primarily intended for use in ChatML-formatted conversational agents and applications requiring nuanced dialogue generation, benefiting from the diverse characteristics of its merged components.
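Since the model expects ChatML-formatted prompts, a conversation is rendered with <|im_start|>/<|im_end|> delimiters around each turn. A minimal sketch of that formatting (the helper name is ours, not part of the model card; in practice the tokenizer's chat template does this for you):

```python
# Minimal sketch of ChatML prompt formatting for Aurora-SCE-12B.
# build_chatml is a hypothetical helper, not an API of the model.
def build_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts as a ChatML string."""
    prompt = ""
    for msg in messages:
        # Each turn is wrapped in <|im_start|>{role} ... <|im_end|>
        prompt += f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn to cue the model to respond
        prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(build_chatml(messages))
```

With the Hugging Face transformers library, the equivalent string is normally produced by tokenizer.apply_chat_template(messages, add_generation_prompt=True) rather than by hand.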