ShengbinYue/LawLLM-7B
LawLLM-7B is a 7.6-billion-parameter large language model developed by Shengbin Yue and Fudan University's DISC Lab, built on Qwen2.5-7B-Instruct. The model is fine-tuned specifically for the Chinese legal domain to provide comprehensive intelligent legal services, and its 131,072-token context length makes it suitable for processing extensive legal texts and complex queries.
Overview
LawLLM-7B is a 7.6-billion-parameter large language model developed by Shengbin Yue and the Data Intelligence and Social Computing Lab (DISC) at Fudan University. Built on the Qwen2.5-7B-Instruct base model and fine-tuned for the Chinese legal domain, it aims to provide comprehensive intelligent legal services.
Key Capabilities
- Chinese Legal Specialization: Fine-tuned specifically for legal tasks and queries within the Chinese legal system.
- Large Context Window: Features a context length of 131,072 tokens, enabling it to handle long, complex legal documents and discussions.
- Legal Assistant: Designed to function as a legal assistant, answering legal questions and explaining legal issues in context.
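As a sketch of how the assistant capability above is typically invoked: Qwen2.5-Instruct models use the ChatML conversation format, and it is reasonable to assume (though worth verifying against the model's bundled tokenizer config) that LawLLM-7B inherits it. The helper name `build_prompt` below is illustrative; in practice, prefer `tokenizer.apply_chat_template()` from the model's own tokenizer.

```python
# Minimal sketch of the ChatML-style prompt layout used by Qwen2.5-Instruct
# models. Assumption: LawLLM-7B keeps its base model's chat template.

def build_prompt(messages):
    """Render a list of {role, content} dicts into a ChatML string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are a Chinese legal assistant."},
    {"role": "user", "content": "What are the sentencing guidelines for theft?"},
])
```

The resulting string would be tokenized and passed to the model for generation; the open `assistant` turn marks where the reply begins.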
Good For
- Legal Research: Assisting with inquiries related to Chinese law, such as sentencing guidelines for specific crimes.
- Intelligent Legal Services: Developing applications that require deep understanding and generation of Chinese legal text.
- Academic and Research Use: Serving as a foundation for further research and development in legal AI, particularly for the Chinese legal system. The model's development is detailed in its technical report and a related paper.
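Even with a 131,072-token window, some case files and filings exceed the budget. A simple sliding-window chunker can pre-split such documents; the sketch below is a hypothetical helper using a rough characters-per-token heuristic, not part of the model's tooling. Real budgeting should count tokens with the model's tokenizer.

```python
# Hypothetical helper: split a long legal document into overlapping chunks
# that fit a model context budget. Uses a crude chars-per-token heuristic;
# actual token counts require the model's tokenizer.

def chunk_document(text, max_tokens=131072, chars_per_token=2, overlap_tokens=256):
    window = max_tokens * chars_per_token          # chunk size in characters
    step = window - overlap_tokens * chars_per_token  # stride between chunks
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + window])
        if start + window >= len(text):
            break
    return chunks
```

Overlap between consecutive chunks preserves context that straddles a boundary, at the cost of some duplicated tokens per chunk.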