YIFEN0902/llama-3.1-8b-therapy-finetuned

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Jan 17, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

YIFEN0902/llama-3.1-8b-therapy-finetuned is an 8 billion parameter Llama 3.1-based causal language model developed by YIFEN0902 and fine-tuned for therapy-related applications. The model retains the Llama 3.1 architecture with a 32,768-token context length and was fine-tuned with Unsloth and Hugging Face's TRL library for faster training. It is designed for conversational contexts that call for therapeutic understanding and responses.


Model Overview

YIFEN0902/llama-3.1-8b-therapy-finetuned is an 8 billion parameter language model built on the Llama 3.1 architecture and developed by YIFEN0902. It was fine-tuned from unsloth/meta-llama-3.1-8b-instruct-bnb-4bit, using the Unsloth library together with Hugging Face's TRL, which the author reports yields roughly 2x faster training. The model is released under the Apache-2.0 license and supports a context length of 32,768 tokens.
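As a Llama 3.1 derivative, the model expects prompts in the Llama 3.1 chat format. In practice the tokenizer's `apply_chat_template()` handles this automatically; the manual sketch below is illustrative only, and the special tokens shown are an assumption based on the base Llama 3.1 template (this fine-tune's tokenizer config may differ).

```python
# Sketch: manually assembling a Llama 3.1-style chat prompt.
# Assumption: this fine-tune keeps the base Llama 3.1 special tokens
# (<|begin_of_text|>, <|start_header_id|>, <|eot_id|>).

def build_llama31_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Build a prompt string from a system message and (role, text) turns."""
    prompt = "<|begin_of_text|>"
    prompt += f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
    for role, text in turns:
        prompt += f"<|start_header_id|>{role}<|end_header_id|>\n\n{text}<|eot_id|>"
    # A trailing assistant header cues the model to generate its reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

prompt = build_llama31_prompt(
    "You are a supportive, empathetic listener.",
    [("user", "I've been feeling overwhelmed at work lately.")],
)
print(prompt)
```

When loading the model through Hugging Face Transformers, prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` over hand-built strings so the template always matches the shipped tokenizer config.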

Key Capabilities

  • Therapy-Oriented Fine-tuning: Specifically adapted for applications requiring therapeutic conversational understanding and generation.
  • Llama 3.1 Foundation: Benefits from the robust capabilities and performance of the Llama 3.1 base model.
  • Optimized Training: Utilizes Unsloth for efficient and faster fine-tuning processes.

Good For

  • Developing AI assistants for mental wellness support.
  • Creating conversational agents for therapeutic dialogue simulation.
  • Applications requiring empathetic and context-aware responses in sensitive discussions.
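For extended multi-turn dialogues of the kind listed above, the conversation history still has to fit within the model's 32,768-token context window. A minimal sketch of one way to budget that window is shown below; the 4-characters-per-token ratio is a rough heuristic I am assuming for English text, not a property of this model's tokenizer, and the real tokenizer should be used for exact counts.

```python
# Sketch: keeping a long multi-turn dialogue inside the 32,768-token
# context window. Assumption: ~4 characters per token on average.

CTX_LIMIT = 32_768
CHARS_PER_TOKEN = 4  # rough heuristic, not the model's actual tokenizer ratio

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // CHARS_PER_TOKEN)

def trim_history(turns: list[tuple[str, str]],
                 budget: int = CTX_LIMIT - 1024) -> list[tuple[str, str]]:
    """Drop the oldest turns until the estimated size fits the budget,
    reserving ~1k tokens of headroom for the model's reply."""
    kept: list[tuple[str, str]] = []
    total = 0
    for role, text in reversed(turns):  # walk from the newest turn backwards
        cost = estimate_tokens(text)
        if total + cost > budget:
            break
        kept.append((role, text))
        total += cost
    return list(reversed(kept))

history = [
    ("user", "x" * 200_000),       # an oversized old turn
    ("assistant", "ok"),
    ("user", "and today?"),
]
trimmed = trim_history(history)
print(len(trimmed))  # the oversized first turn is dropped
```

Trimming whole turns from the oldest end keeps recent context intact, which matters most for empathetic, context-aware replies in ongoing conversations.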