arcee-ai/Saul-Legal-Calme-Instruct

Text Generation | Concurrency Cost: 1 | Model Size: 7B | Quant: FP8 | Ctx Length: 4k | Published: Mar 11, 2024 | License: apache-2.0 | Architecture: Transformer | Open Weights

Saul-Legal-Calme-Instruct is a 7 billion parameter instruction-tuned language model created by arcee-ai, formed by merging MaziyarPanahi/Calme-7B-Instruct-v0.1.1 and Equall/Saul-Instruct-v1. This model is designed for general instruction following, leveraging the combined strengths of its constituent models. It offers a 4096-token context length, making it suitable for a variety of natural language processing tasks.


Saul-Legal-Calme-Instruct Overview

Saul-Legal-Calme-Instruct is a 7 billion parameter instruction-tuned language model developed by arcee-ai. It was created using mergekit by combining two distinct models: MaziyarPanahi/Calme-7B-Instruct-v0.1.1 and Equall/Saul-Instruct-v1. This merging approach aims to synthesize the capabilities of both base models into a single, more versatile instruction-following model.
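A mergekit slerp merge of the kind described is driven by a small YAML recipe. The sketch below is illustrative only: the layer ranges, base model choice, interpolation schedules, and dtype are assumptions in the typical mergekit style, not the published recipe for this model.

```yaml
# Hypothetical mergekit slerp recipe (illustrative values, not the actual config)
slices:
  - sources:
      - model: MaziyarPanahi/Calme-7B-Instruct-v0.1.1
        layer_range: [0, 32]
      - model: Equall/Saul-Instruct-v1
        layer_range: [0, 32]
merge_method: slerp
base_model: MaziyarPanahi/Calme-7B-Instruct-v0.1.1
parameters:
  t:
    # Separate interpolation schedules for attention and MLP tensors,
    # as mentioned in the model's merge description.
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5   # default for all remaining tensors
dtype: bfloat16
```

Running `mergekit-yaml recipe.yml ./merged-model` over such a file produces the merged checkpoint.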

Key Characteristics

  • Merged Architecture: Uses mergekit's slerp (spherical linear interpolation) merge method, with separate interpolation schedules for the self_attn and mlp tensors, to blend the weights of its two source models.
  • Parameter Count: A 7 billion parameter model, offering a balance between performance and computational efficiency.
  • Context Length: Supports a context window of 4096 tokens, enabling it to handle moderately long inputs and generate coherent responses.
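The slerp method named above interpolates along the arc between two weight tensors rather than along the straight line, which better preserves weight magnitudes. A minimal NumPy sketch of the operation (the function name and fallback threshold are our own; mergekit's internal implementation differs in detail):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight arrays.

    t=0 returns v0, t=1 returns v1; intermediate t follows the arc
    between the flattened tensors instead of the chord.
    """
    v0 = np.asarray(v0, dtype=np.float64)
    v1 = np.asarray(v1, dtype=np.float64)
    # Cosine of the angle between the flattened tensors, clamped for safety.
    dot = np.clip(
        np.dot(v0.ravel() / np.linalg.norm(v0),
               v1.ravel() / np.linalg.norm(v1)),
        -1.0, 1.0,
    )
    theta = np.arccos(dot)
    if abs(theta) < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1
```

Unlike linear averaging, slerp of two unit-norm tensors stays on the unit sphere, which is the usual motivation for choosing it in model merges.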

Use Cases

This model is suited to general instruction-following tasks that benefit from the combined strengths of its source models. Because the Saul component was instruction-tuned with a focus on legal text, it may be particularly useful for legal-domain question answering, drafting, and summarization, in addition to general natural language understanding and generation from user prompts.
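When prompting the merged model directly, inputs are typically wrapped in the Mistral-style [INST] template used by its base checkpoints. This is an assumption about the template; verify it against the model tokenizer's chat template before relying on it. A minimal helper:

```python
def format_prompt(user_message, system_prompt=None):
    """Wrap a user message in a Mistral-style [INST] chat template.

    Assumption: the merged model inherits this template from its
    Mistral-7B-based parents; check tokenizer.apply_chat_template
    on the actual checkpoint to confirm.
    """
    if system_prompt:
        # Mistral-style templates fold the system prompt into the
        # first user turn rather than using a separate role.
        user_message = f"{system_prompt}\n\n{user_message}"
    return f"<s>[INST] {user_message} [/INST]"
```

For example, `format_prompt("Summarize this clause.", system_prompt="You are a legal assistant.")` yields a single string ready to tokenize, with generation expected to continue after the closing `[/INST]` tag.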