ChaoticNeutrals/Eris_Remix_7B

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4k · Published: Mar 5, 2024 · License: other · Architecture: Transformer

ChaoticNeutrals/Eris_Remix_7B is a 7-billion-parameter language model created by ChaoticNeutrals by slerp-merging the 'SpecialEdition' and 'Remix' base models. The merge applies distinct parameter weighting to the self-attention and MLP layers, and the model is intended for general language generation tasks. Weights are published in bfloat16 precision, with community-contributed GGUF and Exl2 quants available.


Overview

ChaoticNeutrals/Eris_Remix_7B is a 7-billion-parameter language model developed by ChaoticNeutrals. It is the product of a slerp (spherical linear interpolation) merge of two base models, 'SpecialEdition' and 'Remix', with separate weighting schedules for the self-attention and MLP layers. The merge was defined in a YAML configuration specifying the layer ranges and per-layer-type interpolation weights, with the aim of blending characteristics of both source models.
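The card does not reproduce the configuration itself, but a mergekit slerp config of the kind described generally takes the shape sketched below. The source repository paths, layer ranges, and every weight value here are illustrative assumptions, not the actual settings used for this model:

```yaml
# Illustrative mergekit slerp configuration -- NOT the actual config used
# for Eris_Remix_7B; repo paths, layer ranges, and weights are assumed.
slices:
  - sources:
      - model: ChaoticNeutrals/SpecialEdition   # assumed repo path
        layer_range: [0, 32]
      - model: ChaoticNeutrals/Remix            # assumed repo path
        layer_range: [0, 32]
merge_method: slerp
base_model: ChaoticNeutrals/SpecialEdition
parameters:
  t:
    - filter: self_attn           # interpolation schedule for attention layers
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp                 # a different schedule for MLP layers
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5                  # default weight for all remaining tensors
dtype: bfloat16
```

The `t` parameter sets the interpolation point between the two sources (0 = first model, 1 = second), and the `filter` entries let attention and MLP tensors follow different schedules, which matches the per-layer-type weighting the card describes.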

Key Characteristics

  • Architecture: 7-billion-parameter transformer, derived from a slerp merge of the 'SpecialEdition' and 'Remix' models.
  • Precision: Developed in bfloat16.
  • Merge Method: Employs a slerp merge with distinct weighting for self_attn and mlp layers.
  • Community Quants: Quantized versions are available, including GGUF-IQ-Imatrix by Lewdiculus and Exl2 5bpw by Test157t, making the model usable on a wider range of hardware (see the loading sketch after this list).
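On constrained hardware, the community GGUF quants can be run through llama-cpp-python. A minimal sketch, assuming a quant file has already been downloaded; the filename and sampling settings are illustrative, not taken from the card:

```python
# Minimal sketch: running a community GGUF quant on CPU (or partially
# offloaded to a GPU) via llama-cpp-python. The filename below is
# hypothetical -- use whichever quant file you download from the
# Lewdiculus GGUF-IQ-Imatrix repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Eris_Remix_7B-Q4_K_M-imat.gguf",  # hypothetical local filename
    n_ctx=4096,        # matches the 4k context length listed for this model
    n_gpu_layers=-1,   # offload all layers to GPU if available; 0 for CPU-only
)

out = llm("Describe a chaotic-neutral tavern keeper.", max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```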

Intended Use Cases

This model is suited to general language generation tasks where a 7B-parameter model is sufficient. As a slerp merge, it is intended to blend the strengths of its two source models rather than specialize in either. The available quantized versions make it practical to deploy in environments with memory constraints.
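For deployments that can afford the full bfloat16 weights, the standard Hugging Face transformers loading path applies. A minimal sketch, assuming a CUDA-capable GPU with roughly 14 GB of free VRAM; the prompt and sampling settings are illustrative:

```python
# Minimal sketch: loading Eris_Remix_7B in bfloat16 with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChaoticNeutrals/Eris_Remix_7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the precision the weights are published in
    device_map="auto",           # place layers on available GPU(s)/CPU automatically
)

prompt = "Write a short scene set in a rain-soaked city."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```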