NEU-HAI/mental-alpaca

TEXT GENERATION

  • Model Size: 7B
  • Quantization: FP8
  • Context Length: 4k
  • Published: Aug 21, 2023
  • License: cc-by-nc-4.0
  • Architecture: Transformer

NEU-HAI/mental-alpaca is a 7-billion-parameter Alpaca-based large language model developed by the Northeastern University Human-Centered AI Lab. Fine-tuned on four mental-health datasets (Dreaddit, DepSeverity, SDCNL, CSSRS-Suicide), the model specializes in mental health prediction from online text data. It is intended for English-language research use and supports a 4,096-token context length for text generation tasks.


Overview

NEU-HAI/mental-alpaca is a 7-billion-parameter language model developed by the Northeastern University Human-Centered AI Lab. It is fine-tuned from an Alpaca base model, which is itself fine-tuned from LLaMA-7B, for the specific task of mental health prediction from online text data. The model supports a 4,096-token context length and is designed for research applications.

Key Capabilities

  • Mental Health Prediction: Specialized in analyzing online text for mental health indicators.
  • Fine-tuned Datasets: Trained on the Dreaddit, DepSeverity, SDCNL, and CSSRS-Suicide datasets to enhance its predictive accuracy.
  • English Language Support: Primarily developed and intended for use with English text.
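Because the model is Alpaca-based, inputs are typically wrapped in the standard Stanford Alpaca instruction template; the exact prompts used for the mental health prediction tasks are documented in the associated research paper. The sketch below shows the generic template only, and the stress-detection instruction in it is a hypothetical example, not the paper's wording:

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Wrap an instruction (and optional context) in the Stanford Alpaca template."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:"
    )


# Hypothetical instruction in the style of the Dreaddit stress-detection task.
prompt = build_alpaca_prompt(
    "Decide whether the poster of the following text suffers from stress. "
    "Answer only with 'yes' or 'no'.",
    "I can't sleep and everything feels overwhelming lately.",
)
```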

Intended Use Cases

  • Research Purposes: Designed for academic and research exploration in mental health prediction.
  • Text Analysis: Suitable for analyzing online textual data to identify patterns related to mental health.
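For research experiments, the model can be loaded with the Hugging Face `transformers` library, assuming the checkpoint is available on the Hub under `NEU-HAI/mental-alpaca`. This is a minimal sketch, not a documented inference recipe; the generation parameters are illustrative, and `device_map="auto"` additionally requires the `accelerate` package:

```python
def load_mental_alpaca(model_id: str = "NEU-HAI/mental-alpaca"):
    """Load tokenizer and model from the Hugging Face Hub (downloads ~7B weights)."""
    # Imported lazily so this module can be used without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model


def predict(prompt: str, max_new_tokens: int = 32) -> str:
    """Greedily generate a completion for an Alpaca-formatted prompt."""
    tokenizer, model = load_mental_alpaca()
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Return only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


# Example usage (requires a GPU and downloading the weights):
# answer = predict("### Instruction:\nDecide whether the poster of the "
#                  "following text suffers from stress.\n\n### Response:")
```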

Limitations and Considerations

  • Research Use Only: Not intended for clinical diagnosis or direct patient care.
  • Compliance: Use must adhere to the restrictions and licenses of the original Stanford Alpaca and LLaMA projects.
  • Bias and Risks: The model inherits biases and limitations from its base models, as detailed in the respective projects' documentation. Further details on the fine-tuning process and prompts are available in the associated research paper.