AdaptLLM/law-LLM-13B
AdaptLLM/law-LLM-13B is a 13 billion parameter language model developed by AdaptLLM, based on LLaMA-1. It is continually pre-trained on domain-specific legal corpora using a reading comprehension method that enhances domain knowledge while preserving prompting ability. The model is optimized specifically for legal tasks, particularly question answering in the law domain.
Overview
AdaptLLM/law-LLM-13B is a 13 billion parameter model derived from LLaMA-1-13B, developed by AdaptLLM. It applies continual pre-training on domain-specific corpora, transforming large-scale raw pre-training data into reading comprehension texts. This approach enriches the model with specialized legal knowledge while mitigating the drop in prompting performance on question answering tasks that often accompanies domain-adaptive pre-training.
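Since this is a standard causal language model checkpoint, it can be loaded with the Hugging Face `transformers` library. The sketch below is illustrative only: `ask_law_llm` is a hypothetical helper name, and the generation parameters are assumptions, not values recommended by AdaptLLM.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def ask_law_llm(prompt: str, model_name: str = "AdaptLLM/law-LLM-13B") -> str:
    """Generate a completion from the law-LLM-13B base model.

    Illustrative sketch: loads the tokenizer and model from the Hub,
    then decodes only the newly generated tokens. Loading a 13B model
    requires substantial GPU memory (roughly 26 GB in fp16).
    """
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # max_new_tokens is an arbitrary illustrative choice.
    outputs = model.generate(**inputs, max_new_tokens=256)
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

As a base (non-chat) model, it responds best to completion-style prompts rather than conversational instructions; AdaptLLM also publishes chat-optimized variants for instruction-style use.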
Key Capabilities
- Domain-Specific Expertise: Enhanced knowledge in the legal domain through targeted pre-training.
- Reading Comprehension Method: Employs a unique technique to convert pre-training corpora into reading comprehension formats, improving prompting ability.
- Scalability: Demonstrates consistent effectiveness for larger models, building upon the success of its 7B parameter counterparts.
- Base Model: Serves as a base model for legal applications, with chat-optimized versions also available.
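The reading comprehension method mentioned above mines task-like question/answer pairs from the raw corpus itself and appends them to each passage. The following is a deliberately simplified, hypothetical illustration of that idea (the actual AdaptLLM pipeline uses a much richer set of mined task types and patterns); the regex patterns and question templates here are assumptions for demonstration only.

```python
import re

def to_reading_comprehension(raw_text: str) -> str:
    """Append simple mined comprehension tasks to a raw passage.

    Simplified sketch of the reading-comprehension transformation:
    regex patterns extract self-contained facts from the passage, which
    are then restated as question/answer pairs after the original text.
    """
    tasks = []
    # Pattern: "X is defined as Y." -> a definition question.
    for m in re.finditer(r"(?m)^(.+?) is defined as (.+?)\.", raw_text):
        term, definition = m.group(1).strip(), m.group(2).strip()
        tasks.append(f"Question: How is {term} defined?\nAnswer: {definition}.")
    # Pattern: "X means Y." -> a meaning question.
    for m in re.finditer(r"(?m)^(.+?) means (.+?)\.", raw_text):
        term, meaning = m.group(1).strip(), m.group(2).strip()
        tasks.append(f"Question: What does {term} mean?\nAnswer: {meaning}.")
    if not tasks:
        return raw_text
    return raw_text + "\n\n" + "\n\n".join(tasks)

passage = (
    "Consideration is defined as something of value exchanged between parties.\n"
    "Tort means a civil wrong that causes a claimant to suffer loss."
)
print(to_reading_comprehension(passage))
```

Training on passages augmented this way exposes the model to domain text and QA-style prompting simultaneously, which is why prompting ability is preserved during domain adaptation.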
Good For
- Legal Question Answering: Excels at understanding and responding to queries within the law domain.
- Domain-Specific Research: Useful for tasks requiring deep legal knowledge and contextual understanding.
- Developing Specialized Applications: Provides a strong foundation for building applications focused on legal text analysis and information retrieval.