MentaLLaMA-chat-7B: Interpretable Mental Health Analysis
MentaLLaMA-chat-7B is a 7-billion-parameter instruction-following model from the MentaLLaMA project, which focuses on interpretable mental health analysis. Released by klyang, it is fine-tuned from Meta's LLaMA2-chat-7B on the IMHI instruction-tuning dataset of 75,000 high-quality natural language instructions.
Key Capabilities
- Interpretable Mental Health Analysis: Designed to perform complex mental health analysis for various conditions and provide reliable explanations for its predictions.
- Instruction Following: Enhanced with instruction-following capabilities through fine-tuning on the IMHI dataset.
- Performance: Achieves performance comparable to state-of-the-art discriminative methods on the IMHI benchmark, which includes 20,000 test samples.
- Ethical Considerations: Despite promising results, the developers stress that all predictions and explanations are intended for non-clinical research only; help-seekers should always consult professional medical services.
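Since the model is distributed through Hugging Face, it can be loaded with the standard transformers API. The sketch below is a minimal, hedged example: the `build_prompt` template (instruction plus post) is an illustrative assumption, not the exact IMHI prompt format, and generation settings are left at defaults.

```python
MODEL_ID = "klyang/MentaLLaMA-chat-7B"  # Hugging Face model ID


def build_prompt(instruction: str, post: str) -> str:
    """Combine a task instruction with a social media post into one prompt.

    Note: this template is an illustrative assumption; consult the
    MentaLLaMA repository for the exact IMHI prompt format.
    """
    return f"{instruction}\nPost: {post}\nResponse:"


def analyze(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate an analysis with explanation for the given prompt.

    Heavy imports are kept local so the prompt helper has no dependencies;
    running this requires the transformers library and enough memory for
    the 7B weights.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    prompt = build_prompt(
        "Consider this post and determine if the poster shows signs of "
        "depression. Explain your reasoning.",
        "I haven't slept properly in weeks and nothing feels worth doing.",
    )
    print(prompt)
    # print(analyze(prompt))  # uncomment to run actual generation
```

As with any instruction-tuned chat model, outputs are sensitive to prompt wording, so matching the instruction style used in the IMHI tuning data will generally yield better explanations.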
Other Models in the MentaLLaMA Series
The MentaLLaMA project also includes:
- MentaLLaMA-chat-13B: A larger 13B parameter version based on LLaMA2-chat-13B, covering 10 mental health analysis tasks.
- MentalBART: A lightweight model based on BART-large, focused on completion-based interpretable mental health analysis without instruction-following.
- MentalT5: Another lightweight model based on T5-large, also for completion-based interpretable mental health analysis without instruction-following.