YIFEN0902/llama-3.1-8b-therapy-finetuned
Task: Text Generation
- Concurrency Cost: 1
- Model Size: 8B
- Quantization: FP8
- Context Length: 32k
- Published: Jan 17, 2026
- License: apache-2.0
- Architecture: Transformer
- Tags: Open Weights, Cold
YIFEN0902/llama-3.1-8b-therapy-finetuned is an 8-billion-parameter causal language model based on Llama 3.1, published by YIFEN0902 and fine-tuned for therapy-related applications. It retains the Llama 3.1 architecture with a 32,768-token context length and was trained with Unsloth and Hugging Face's TRL library for faster fine-tuning. The model targets conversational use cases that call for empathetic, therapy-oriented responses.
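Since the model follows the Llama 3.1 architecture, prompts are expected to use the standard Llama 3.1 chat format. The sketch below builds a single-turn prompt by hand using those special tokens; the system-prompt wording is illustrative, and in practice you would normally rely on the model's own tokenizer via `tokenizer.apply_chat_template` rather than hand-rolling the string.

```python
# Sketch: assembling a Llama 3.1-style chat prompt for this model.
# The special tokens are the standard Llama 3.1 chat template; whether
# this fine-tune expects a particular system prompt is an assumption.

def build_llama31_prompt(system: str, user: str) -> str:
    """Format a single-turn conversation with Llama 3.1 chat special tokens."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        # The generation continues from the open assistant header.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama31_prompt(
    "You are a supportive, empathetic counseling assistant.",
    "I've been feeling overwhelmed at work lately.",
)
print(prompt)
```

The trailing assistant header is left open on purpose: the model completes the conversation from that point, so the formatted string ends where generation begins.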