ajtaltarabukin2022/merged_beat_champ_2model_slerp

Text Generation · Model Size: 32B · Quantization: FP8 · Context Length: 32k · Concurrency Cost: 2 · Architecture: Transformer · Published: Apr 18, 2026

ajtaltarabukin2022/merged_beat_champ_2model_slerp is a 32-billion-parameter language model created by ajtaltarabukin2022 by merging two pre-trained models with the SLERP method. It combines dura-lori/affine-5DoKPQhZmKnFk4mNEmH4UorbqHDe3PFAPvEfJyDwNkimoAMe and RLStepone/Affine-h29-5Coip2NhkPhFCMLQ7LYs3zLVz9RSEZP7HJrakDeqM5RVdPs4, weighted at 60% and 40% respectively across all 64 layers, and is designed for general language tasks that draw on the combined strengths of its constituent models.


Model Overview

ajtaltarabukin2022/merged_beat_champ_2model_slerp is a 32-billion-parameter language model with a 32,768-token context length, created by ajtaltarabukin2022. It was produced with mergekit using the SLERP (Spherical Linear Interpolation) merge method.
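For intuition, SLERP interpolates along the great-circle arc between two weight tensors rather than along a straight line, which preserves the norm structure that plain linear averaging can wash out. The sketch below is a minimal, self-contained illustration of the operation; it is not mergekit's actual implementation, and the tensor shapes are placeholders.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Treats the flattened tensors as points on a hypersphere and walks
    along the great-circle arc between them. t=0 returns a; t=1 returns b.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(a_dir @ b_dir, -1.0, 1.0)
    omega = torch.arccos(dot)            # angle between the two tensors
    if omega.abs() < eps:                # (near-)parallel: fall back to lerp
        return (1 - t) * a + t * b
    sin_omega = torch.sin(omega)
    coef_a = torch.sin((1 - t) * omega) / sin_omega
    coef_b = torch.sin(t * omega) / sin_omega
    return (coef_a * a_flat + coef_b * b_flat).reshape(a.shape).to(a.dtype)

# A 60/40 blend toward the first model corresponds to t = 0.4:
# t steps 40% of the way along the arc from a to b.
merged = slerp(0.4, torch.randn(4096, 4096), torch.randn(4096, 4096))
```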

Merge Details

This model was constructed by combining two distinct pre-trained language models:

  • dura-lori/affine-5DoKPQhZmKnFk4mNEmH4UorbqHDe3PFAPvEfJyDwNkimoAMe
  • RLStepone/Affine-h29-5Coip2NhkPhFCMLQ7LYs3zLVz9RSEZP7HJrakDeqM5RVdPs4

The merge configuration weights the dura-lori model at 60% and the RLStepone model at 40% across all 64 layers. This interpolation aims to blend the capabilities of the two base models into a single, more robust model.
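The model card does not include the original mergekit configuration; the following is a hypothetical reconstruction consistent with the details above (SLERP, 60/40 weighting, 64 layers). In mergekit's SLERP syntax, `t` is the interpolation weight toward the non-base model, so 60% dura-lori / 40% RLStepone corresponds to `t: 0.4` with dura-lori as the base; the `dtype` is an assumption.

```yaml
slices:
  - sources:
      - model: dura-lori/affine-5DoKPQhZmKnFk4mNEmH4UorbqHDe3PFAPvEfJyDwNkimoAMe
        layer_range: [0, 64]
      - model: RLStepone/Affine-h29-5Coip2NhkPhFCMLQ7LYs3zLVz9RSEZP7HJrakDeqM5RVdPs4
        layer_range: [0, 64]
merge_method: slerp
base_model: dura-lori/affine-5DoKPQhZmKnFk4mNEmH4UorbqHDe3PFAPvEfJyDwNkimoAMe
parameters:
  t: 0.4   # 0.0 = all base (dura-lori), 1.0 = all RLStepone
dtype: bfloat16
```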

Intended Use

As a merged model, ajtaltarabukin2022/merged_beat_champ_2model_slerp is suited to a broad range of general-purpose language understanding and generation tasks, inheriting the combined strengths of its constituent models.
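If the checkpoint is published on the Hugging Face Hub under the id above and is transformers-compatible (an assumption; the model card does not say, and depending on how the FP8 quantization is packaged, extra dependencies may be required), it can be loaded with the standard transformers API:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajtaltarabukin2022/merged_beat_champ_2model_slerp"  # assumed Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # honor the checkpoint's stored dtype
    device_map="auto",    # shard across available GPUs; 32B weights are ~64 GB in FP16
)

prompt = "Explain spherical linear interpolation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```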