ECNU-SEA/SEA-E

Text Generation · Model Size: 7B · Quantization: FP8 · Context Length: 4k · Published: Jun 24, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

The ECNU-SEA/SEA-E model is a 7 billion parameter instruction-tuned causal language model developed by ECNU-SEA, built upon the Mistral-7B-Instruct-v0.2 backbone. It is specifically fine-tuned on a high-quality peer review instruction dataset to provide comprehensive and insightful review feedback for academic papers. This model excels at generating detailed critiques for submitted papers, particularly within the machine learning domain, and has a context length of 4096 tokens.


ECNU-SEA/SEA-E: Automated Peer Reviewing Model

ECNU-SEA/SEA-E is a 7 billion parameter instruction-tuned model, leveraging the Mistral-7B-Instruct-v0.2 architecture. It has been specifically fine-tuned using a standardized, high-quality peer review instruction dataset to generate comprehensive and insightful feedback for academic papers. This model is part of the Paper SEA (Standardization, Evaluation, and Analysis) project, which was accepted by EMNLP 2024.

Key Capabilities

  • Automated Peer Review: Provides detailed and constructive review feedback for submitted research papers.
  • Specialized Domain: Optimized for generating reviews within the field of machine learning.
  • Instruction-Tuned: Derived from supervised fine-tuning (SFT) on a dedicated peer review dataset.
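Since SEA-E builds on the Mistral-7B-Instruct-v0.2 backbone, prompts can be wrapped in Mistral's `[INST] ... [/INST]` chat format. The sketch below builds such a review prompt and shows, in comments, a hypothetical generation call via Hugging Face `transformers`; the exact instruction wording and SFT prompt template used by SEA-E are assumptions, not confirmed by the model card.

```python
# Minimal sketch: wrap a paper in the Mistral-Instruct chat format that
# the Mistral-7B-Instruct-v0.2 backbone expects. The instruction text
# below is illustrative; SEA-E's actual SFT prompt template may differ.

def build_review_prompt(paper_text: str) -> str:
    """Return a [INST]-formatted review prompt for a Mistral-Instruct model."""
    instruction = (
        "You are a peer reviewer for a machine learning venue. "
        "Write a detailed, constructive review of the following paper.\n\n"
    )
    # Note: the model's context window is 4096 tokens, so long papers
    # would need to be truncated or summarized before prompting.
    return f"<s>[INST] {instruction}{paper_text} [/INST]"

if __name__ == "__main__":
    # Hypothetical usage with Hugging Face transformers (requires
    # downloading the 7B checkpoint; not executed here):
    #
    # from transformers import AutoModelForCausalLM, AutoTokenizer
    # tok = AutoTokenizer.from_pretrained("ECNU-SEA/SEA-E")
    # model = AutoModelForCausalLM.from_pretrained("ECNU-SEA/SEA-E")
    # inputs = tok(build_review_prompt(paper), return_tensors="pt")
    # out = model.generate(**inputs, max_new_tokens=1024)
    print(build_review_prompt("Title: ...\nAbstract: ..."))
```

The prompt builder is separated from the (commented-out) generation call so the formatting logic can be reused with any inference stack, e.g. vLLM or text-generation-inference.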

Good For

  • Authors: Obtaining informative reviews to refine and improve their papers before submission.
  • Researchers: Exploring automated methods for academic peer review within machine learning.

Limitations

  • Domain Specificity: Trained for the machine learning domain; reviews of papers from other disciplines are not guaranteed to be insightful.
  • Informative Reviews Only: Intended to provide feedback for paper polishing, not for direct acceptance/rejection recommendations.
  • Non-Commercial Use: Commercial use of the model is not permitted.