automerger/Experiment27Pastiche-7B

Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Mar 10, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights

automerger/Experiment27Pastiche-7B is a 7-billion-parameter language model published by automerger, produced as an automated merge of yam-peleg/Experiment27-7B and CorticalStack/pastiche-crown-clown-7b-dare-dpo. The merge was generated with the DARE TIES method using the density and weight parameters detailed below. It is intended for general language generation tasks, leveraging the combined strengths of its constituent models.


Model Overview

Experiment27Pastiche-7B is a 7 billion parameter language model resulting from an automated merge process. Developed by automerger, this model combines the strengths of two base models: yam-peleg/Experiment27-7B and CorticalStack/pastiche-crown-clown-7b-dare-dpo.

Merge Configuration

The model was created using the DARE TIES merge method. Key parameters for the merge included:

  • Density: 0.53 for CorticalStack/pastiche-crown-clown-7b-dare-dpo
  • Weight: 0.6 for CorticalStack/pastiche-crown-clown-7b-dare-dpo
  • Base Model: yam-peleg/Experiment27-7B

In the DARE TIES method, density sets the fraction of a model's delta parameters (its differences from the base) that are retained after random pruning, and weight scales that model's contribution to the merged weights. Here, roughly 53% of pastiche-crown-clown's deltas are kept and blended onto the Experiment27-7B base at a 0.6 scale, as reflected in the configuration sketch below.
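For reference, the parameters above map onto a mergekit-style configuration roughly like the following. This is a reconstruction, not the published config: the int8_mask and bfloat16 settings are taken from the Usage notes below, and any other fields are typical defaults rather than confirmed values.

```yaml
# Hypothetical mergekit config reconstructed from the stated parameters.
# int8_mask and dtype follow the Usage section; treat the rest as typical defaults.
models:
  - model: yam-peleg/Experiment27-7B
    # Base model: contributes the reference weights, no density/weight needed.
  - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
    parameters:
      density: 0.53   # keep ~53% of this model's delta parameters
      weight: 0.6     # scale its contribution in the merge
merge_method: dare_ties
base_model: yam-peleg/Experiment27-7B
parameters:
  int8_mask: true
dtype: bfloat16
```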

Usage

Experiment27Pastiche-7B is designed for text generation. It integrates into Python environments through the transformers library, supporting standard chat-template application and generation pipelines. The model runs in bfloat16 precision, matching the dtype used for the merge; the int8_mask option mentioned alongside it applies at merge time rather than as an inference optimization.
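A minimal loading and generation sketch with transformers is shown below. The prompt and sampling settings are illustrative assumptions, and the chat template used is whichever one ships with the model's tokenizer.

```python
# Minimal sketch: load the merged model and run chat-style generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "automerger/Experiment27Pastiche-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the precision used for the merge
    device_map="auto",
)

# Build a prompt via the tokenizer's chat template (assumed to be defined).
messages = [{"role": "user", "content": "Explain model merging in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```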

When to Use This Model

  • General Text Generation: Suitable for a wide range of language generation tasks where a 7B parameter model is appropriate.
  • Experimentation with Merged Models: Ideal for developers interested in exploring the capabilities of models created via automated merging techniques like DARE TIES.