arcee-ai/Saul-Base-Calme-7B-Instruct-slerp

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Mar 30, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

arcee-ai/Saul-Base-Calme-7B-Instruct-slerp is a 7-billion-parameter instruction-tuned language model created by arcee-ai. It merges Equall/Saul-Base and MaziyarPanahi/Calme-7B-Instruct-v0.1.1 with the slerp method to combine the strengths of both constituent models. It features a 4096-token context length and is suited to general instruction-following tasks.


Model Overview

arcee-ai/Saul-Base-Calme-7B-Instruct-slerp is a 7-billion-parameter instruction-tuned language model developed by arcee-ai. It was produced by merging two base models, Equall/Saul-Base and MaziyarPanahi/Calme-7B-Instruct-v0.1.1, using the slerp (spherical linear interpolation) method via mergekit, with the aim of combining the capabilities of both parents.
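For intuition, slerp interpolates each pair of corresponding weight tensors along the arc between them rather than along a straight line, which preserves the norm geometry of the weights better than plain averaging. The sketch below is a minimal, illustrative PyTorch implementation; it is not mergekit's exact code, and the stand-in tensor names and the midpoint factor t = 0.5 are assumptions for demonstration.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shaped weight tensors."""
    v0_flat, v1_flat = v0.flatten().float(), v1.flatten().float()
    # Angle between the two weight vectors.
    cos_omega = torch.dot(v0_flat, v1_flat) / (v0_flat.norm() * v1_flat.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0, 1.0))
    if omega.abs() < 1e-4:
        # Nearly colinear vectors: fall back to linear interpolation,
        # since sin(omega) is numerically unstable here.
        merged = (1.0 - t) * v0_flat + t * v1_flat
    else:
        sin_omega = torch.sin(omega)
        merged = (
            torch.sin((1.0 - t) * omega) / sin_omega * v0_flat
            + torch.sin(t * omega) / sin_omega * v1_flat
        )
    return merged.reshape(v0.shape).to(v0.dtype)

# Example: merge two same-shaped weight matrices at the midpoint t = 0.5.
w_saul = torch.randn(4096, 4096)   # stand-in for an Equall/Saul-Base weight
w_calme = torch.randn(4096, 4096)  # stand-in for a Calme-7B-Instruct weight
w_merged = slerp(0.5, w_saul, w_calme)
```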

Key Characteristics

  • Merged Architecture: Combines Equall/Saul-Base and MaziyarPanahi/Calme-7B-Instruct-v0.1.1 to create a new instruction-following model.
  • Merge Method: Uses slerp for a balanced, geometry-preserving blend of the source models' weights, as illustrated in the sketch above.
  • Parameter Count: A 7-billion-parameter model, small enough for single-GPU inference in half precision.
  • Context Length: Supports a context window of 4096 tokens, suitable for a variety of conversational and text generation tasks (see the loading sketch after this list).
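As a quick sanity check on the figures above, the snippet below loads the configuration and tokenizer from the Hugging Face Hub, reads the model's positional limit, and counts prompt tokens against the context budget. A minimal sketch assuming the standard transformers API; note that the value in the config may differ from the 4k serving limit advertised here, and the prompt string is only a placeholder.

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "arcee-ai/Saul-Base-Calme-7B-Instruct-slerp"

config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The model's positional limit; the serving context here is listed as 4k.
max_ctx = config.max_position_embeddings
print(f"max_position_embeddings = {max_ctx}")

# Count prompt tokens so prompt + generated tokens stay within budget.
prompt = "Summarize the following paragraph in two sentences: ..."
n_tokens = len(tokenizer(prompt)["input_ids"])
print(f"Prompt uses {n_tokens} of {max_ctx} tokens")
```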

Intended Use Cases

This model is designed for general instruction-following applications where the combined strengths of its two parents are beneficial. Because merged models can inherit behavior unevenly, developers should evaluate it on their own text generation and comprehension tasks to confirm which characteristics carry over from each base model; a minimal generation example follows.
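The sketch below runs a single instruction through the model with the transformers library. It assumes the repository ships a chat template (plausible given its instruction-tuned parent, but worth verifying); if it does not, a plain prompt string can be passed to the tokenizer instead. The example prompt and sampling parameters are illustrative, not recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Saul-Base-Calme-7B-Instruct-slerp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 7B model in fp16 fits on a single ~16 GB GPU
    device_map="auto",          # requires the accelerate package
)

messages = [{"role": "user", "content": "Explain what model merging is in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```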