automerger/OgnoExperiment27-7B

Text generation · 7B parameters · FP8 quantization · 4k context length · Published: Mar 8, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights

automerger/OgnoExperiment27-7B is a 7 billion parameter language model created by automerger using the SLERP merge method. It combines eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2 and yam-peleg/Experiment27-7B, and is intended for general-purpose language generation tasks that draw on the strengths of both source models.

Model Overview

automerger/OgnoExperiment27-7B is the product of a SLERP (spherical linear interpolation) merge, a technique that blends the weights of two pre-trained models along the arc between them rather than averaging them linearly, with the aim of preserving more of each model's character than a plain weighted average would.
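For intuition, the sketch below shows the core SLERP computation applied to a pair of weight tensors, along the lines of what merge toolkits compute per parameter. It is a minimal illustration of the interpolation formula, not the exact implementation used to produce this model.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors."""
    v0f, v1f = v0.ravel(), v1.ravel()
    # Cosine of the angle between the flattened tensors.
    dot = np.dot(v0f, v1f) / ((np.linalg.norm(v0f) * np.linalg.norm(v1f)) + eps)
    dot = np.clip(dot, -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

# t = 0 returns the first model's tensor, t = 1 the second's.
merged = slerp(0.5, np.random.randn(4, 4), np.random.randn(4, 4))
```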

Merge Details

The model integrates the following base models:

  • eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
  • yam-peleg/Experiment27-7B

The merge configuration applies distinct, layer-varying t (interpolation) schedules to the self_attn and mlp parameter filters across the layer range 0 to 32 of both source models, with eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2 serving as the base model. A reconstruction of what such a configuration looks like is sketched below.
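SLERP merges of this kind are typically produced with mergekit, driven by a YAML configuration. The Python sketch below writes out a plausible reconstruction of such a config; the per-layer t schedules are illustrative placeholders, since the card states only that varying t values were applied to the self_attn and mlp filters.

```python
# Hypothetical reconstruction of a mergekit SLERP config for this merge.
# The t schedules are illustrative placeholders, not the actual values used.
merge_config = """\
slices:
  - sources:
      - model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
        layer_range: [0, 32]
      - model: yam-peleg/Experiment27-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]  # illustrative per-layer schedule
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]  # illustrative per-layer schedule
    - value: 0.5                    # assumed default t for remaining tensors
dtype: bfloat16
"""

with open("merge_config.yaml", "w") as f:
    f.write(merge_config)

# The merge would then be run with mergekit's CLI:
#   mergekit-yaml merge_config.yaml ./OgnoExperiment27-7B
```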

Potential Use Cases

Given its merged nature, this model should suit a range of general-purpose language generation and understanding tasks, combining the knowledge and fine-tuning of its constituent models. Developers can experiment with the merge to see how these combined characteristics hold up across different applications; a typical loading example follows.
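As a sketch of how a model like this is commonly loaded, the example below uses the Hugging Face transformers library. The prompt and generation settings are illustrative choices, not recommendations from the model card.

```python
# Minimal usage sketch with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "automerger/OgnoExperiment27-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",  # requires the accelerate package
)

prompt = "Explain spherical linear interpolation (SLERP) in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```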