ArkMaster123/qwen2.5-7b-therapist-v2

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

ArkMaster123/qwen2.5-7b-therapist-v2 is a 7.6-billion-parameter causal language model fine-tuned from Qwen/Qwen2.5-7B-Instruct. It was adapted with LoRA on a dataset of 250 therapist conversations and is optimized for generating supportive, empathetic responses in a therapeutic conversational context, with a context length of 32,768 tokens.


Model Overview

ArkMaster123/qwen2.5-7b-therapist-v2 is a specialized 7.6 billion parameter language model, fine-tuned from the robust Qwen/Qwen2.5-7B-Instruct base. Its primary function is to engage in and generate therapeutic conversations, providing supportive and empathetic dialogue.

Key Capabilities

  • Therapeutic Conversation Generation: Specifically trained on a dataset of 250 therapist conversations to produce relevant and helpful responses in a therapeutic context.
  • Qwen2.5 Architecture: Benefits from the strong foundational capabilities of the Qwen2.5-7B-Instruct model.
  • Efficient Fine-tuning: Utilizes LoRA (Rank 64) for efficient adaptation, with the adapter merged into the full model for streamlined deployment.
  • Large Context Window: Supports a context length of 32768 tokens, allowing for extended and coherent conversational turns.
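A minimal inference sketch using the standard Hugging Face `transformers` chat-template workflow. The system prompt, generation settings, and helper names below are illustrative assumptions, not part of the model card:

```python
# Sketch: loading the model and generating a reply with transformers.
# Assumes a GPU with enough memory for the 7.6B weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ArkMaster123/qwen2.5-7b-therapist-v2"

def build_messages(user_text: str) -> list[dict]:
    """Chat-format messages; the system prompt here is an illustrative assumption."""
    return [
        {"role": "system", "content": "You are a supportive, empathetic conversational assistant."},
        {"role": "user", "content": user_text},
    ]

def generate_reply(user_text: str, max_new_tokens: int = 256) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
    prompt = tokenizer.apply_chat_template(
        build_messages(user_text), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens, keeping only the newly generated reply.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate_reply("I've been feeling overwhelmed at work lately."))
```

Because the model supports a 32,768-token context, earlier turns can simply be appended to the message list to keep long conversations coherent.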

Training Details

The model was fine-tuned with LoRA at a learning rate of 2e-4 over 3 epochs, using bf16 precision and gradient checkpointing. Training ran on a Modal A100 GPU and completed in approximately 10 minutes. Note that this model is not a substitute for professional mental health services and should be used responsibly.