nayaksomkar/Qwen3-0.6B-PsychLM

Task: Text Generation · Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Apr 24, 2026 · Architecture: Transformer

nayaksomkar/Qwen3-0.6B-PsychLM is a 0.8 billion parameter Qwen3-based language model, fine-tuned by nayaksomkar for psychology and mental health conversational tasks. It was trained on a curated dataset of psychology-focused user/assistant interactions, making it suitable for applications such as mental health chatbots and therapy-style AI assistants that require therapeutic or psychological framing in their responses.

Overview

This model, nayaksomkar/Qwen3-0.6B-PsychLM, is a 0.8 billion parameter Qwen3-based language model fine-tuned on the Psychology Quality Dataset. This dataset, curated by nayaksomkar, combines multiple psychology and mental health datasets into a standardized user/assistant conversational format.

Key Capabilities

  • Specialized Domain Understanding: Optimized for conversations within psychology and mental health.
  • Conversational Format: Trained on a user/assistant dialogue structure, ideal for interactive applications.
  • Data Quality: The training data was processed to remove duplicates and harmful content, focusing on therapeutic and supportive responses.

Good For

  • Developing mental health chatbots.
  • Creating therapy-style AI assistants.
  • Supporting psychology NLP research requiring domain-specific language models.
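A minimal inference sketch with the Hugging Face transformers library is shown below. The sampling settings (temperature, token budget) are illustrative assumptions, not documented defaults for this model; the sketch assumes the repository ships a standard chat template.

```python
MODEL_ID = "nayaksomkar/Qwen3-0.6B-PsychLM"

def build_messages(user_text: str) -> list[dict]:
    # Wrap a single user turn in the user/assistant format the model was tuned on.
    return [{"role": "user", "content": user_text}]

def generate_reply(user_text: str, max_new_tokens: int = 256) -> str:
    # Load weights on first call (downloads the model); imports are kept local
    # so the helper above stays usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    # Render the conversation through the model's chat template, then generate.
    prompt = tokenizer.apply_chat_template(
        build_messages(user_text), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

For example, `generate_reply("I feel anxious before exams. What can I do?")` would return one assistant-style supportive response.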

Important Note

This model is not a substitute for professional medical advice. It should be used with appropriate safety mechanisms in production environments.