Equall/Saul-7B-Instruct-v1

Hugging Face
  • Task: Text Generation
  • Concurrency Cost: 1
  • Model Size: 7B
  • Quant: FP8
  • Ctx Length: 4k
  • Published: Feb 7, 2024
  • License: MIT
  • Architecture: Transformer
  • Open Weights · Warm · 0.1K

Equall/Saul-7B-Instruct-v1 is a 7 billion parameter instruction-tuned causal language model developed by Equall.ai in collaboration with CentraleSupélec, Sorbonne Université, Instituto Superior Técnico, and NOVA School of Law. Built on the Mistral-7B architecture, the model is tailored to the legal domain and targets generation tasks in legal contexts, such as drafting, summarization, and question answering.


Equall/Saul-7B-Instruct-v1: A Legal Domain LLM

Equall/Saul-7B-Instruct-v1 is a 7 billion parameter instruction-tuned language model developed by Equall.ai in collaboration with several academic institutions. It continues pre-training from Mistral-7B and is specifically adapted and optimized for the legal domain.

Key Capabilities

  • Legal Text Generation: Designed for generating content relevant to legal use cases.
  • Instruction Following: Instruction-tuned to respond to user queries effectively within its specialized domain.
  • Mistral-7B Base: Leverages the robust architecture of Mistral-7B, providing a strong foundation for its specialized performance.

Use Cases

This model is intended for applications requiring language generation in legal contexts. Developers can integrate it into systems for tasks such as drafting legal documents, summarizing legal texts, or answering legal questions, where its domain-specific training provides an advantage.
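Below is a minimal loading-and-inference sketch using the Hugging Face transformers library. It assumes the model ID shown above resolves on the Hugging Face Hub, that the tokenizer ships a Mistral-style chat template, and that a GPU with sufficient memory is available; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load the model with transformers and ask a legal question.
# Requires transformers, torch, and accelerate (for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Equall/Saul-7B-Instruct-v1"  # ID as listed above; verify on the Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 7B model within a single 24 GB GPU
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": "Summarize the key obligations of a lessee under a standard commercial lease.",
    }
]
# Apply the chat template shipped with the tokenizer (assumed Mistral-style).
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```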

Limitations

Like all LLMs, it is subject to inherent limitations, including the potential for inaccurate or nonsensical outputs (hallucinations). As a 7B parameter model, its performance may not match that of significantly larger models (e.g., 70B variants).

Popular Sampler Settings

The most popular sampler configurations used by Featherless users for this model adjust the following parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
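As a sketch of how these knobs map onto a local transformers generation call: the values below are placeholders, not the actual Featherless user configurations (which are not reproduced here), and frequency_penalty and presence_penalty are OpenAI-style API parameters with no direct GenerationConfig equivalent.

```python
# Hypothetical sampler settings expressed as a transformers GenerationConfig.
from transformers import GenerationConfig

generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,         # placeholder value, not a Featherless-reported setting
    top_p=0.9,               # nucleus sampling cutoff
    top_k=40,                # restrict sampling to the 40 most likely tokens
    repetition_penalty=1.1,  # discourage verbatim repetition
    min_p=0.05,              # requires a recent transformers release with min_p support
    max_new_tokens=512,
)

# Reuses the model/inputs from the loading sketch above:
# outputs = model.generate(inputs, generation_config=generation_config)
```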