samhog/psychology-alpaca-merged

Text Generation · Model Size: 7B · Quantization: FP8 · Context Length: 4K · Architecture: Transformer

samhog/psychology-alpaca-merged is a 7-billion-parameter LLaMA-based language model developed by Samuel Höglund and Josef Khedri. It was fine-tuned on 10,000 psychology-related prompts and answers generated by ChatGPT, giving it specialized knowledge of psychology. The model is designed to outperform its LLaMA base model on psychology-specific tasks, making it suitable for applications that require psychological domain understanding.


Psychology Alpaca Merged: A Specialized LLaMA-7B Model

The samhog/psychology-alpaca-merged is a 7 billion parameter language model built upon the LLaMA architecture. Developed by Samuel Höglund and Josef Khedri, this model has been specifically fine-tuned using a dataset of 10,000 psychology-related prompts and their corresponding answers, which were generated by ChatGPT.

Key Capabilities

  • Psychology Domain Expertise: The model demonstrates specialized knowledge of psychology, having been fine-tuned exclusively on psychology-related data.
  • Improved Performance: It generally outperforms its base LLaMA parent model on tasks requiring psychological understanding.
  • Research Foundation: This model originated from a thesis project exploring machine learning and psychology, specifically comparing reinforcement learning from human feedback versus AI feedback.

Good For

  • Psychology-focused NLP applications: Ideal for tasks such as generating psychological insights, answering psychology-related questions, or assisting research within the domain (see the inference sketch after this list).
  • Base for further fine-tuning: Can serve as a strong foundation for additional fine-tuning, particularly for projects involving reinforcement learning in psychological contexts (a LoRA-style starting point is sketched below).
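
Example: Loading the Model for Inference

A minimal inference sketch, assuming the merged weights are published on the Hugging Face Hub under the repo id samhog/psychology-alpaca-merged and load through the standard transformers API. The Alpaca-style prompt template and the generation parameters are illustrative assumptions, not the authors' documented usage.

```python
# Minimal inference sketch; repo id taken from the model card above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "samhog/psychology-alpaca-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision fits a 7B model on a single ~16 GB GPU
    device_map="auto",          # requires the accelerate package
)

# Alpaca-style instruction prompt; the exact template the authors used
# is an assumption based on the model's Alpaca lineage.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain the difference between classical and "
    "operant conditioning.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```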
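
Example: Starting Point for Further Fine-Tuning

For the fine-tuning use case, one hypothetical starting point is to attach PEFT/LoRA adapters on top of the merged weights. The rank, target modules, and dropout below are assumptions chosen as common defaults for LLaMA-7B, not values from the original thesis project.

```python
# Hypothetical LoRA fine-tuning setup; hyperparameters are illustrative.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("samhog/psychology-alpaca-merged")

lora_config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA blocks
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Training the adapters then proceeds with any standard causal-LM training loop (e.g., the transformers Trainer) over a domain dataset of the user's choosing.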