WesleySantos/depression-llama-2-7b
Text Generation | Concurrency Cost: 1 | Model Size: 7B | Quant: FP8 | Ctx Length: 4k | Architecture: Transformer | Cold

WesleySantos/depression-llama-2-7b is a language model based on Llama 2 7B, fine-tuned for depression-related text generation tasks. Fine-tuning was performed with PEFT 0.6.0.dev0 using 8-bit quantization via bitsandbytes, which keeps memory requirements low enough to adapt the 7B base model efficiently.
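A minimal sketch of how such a model is typically loaded with 8-bit quantization, assuming the `transformers`, `bitsandbytes`, and `accelerate` packages are installed (the loading code below is an illustration based on the quantization setup described above, not an official usage snippet from the model author):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "WesleySantos/depression-llama-2-7b"

# 8-bit quantization config matching the bitsandbytes setup used in fine-tuning.
quant_config = BitsAndBytesConfig(load_in_8bit=True)


def load_model(model_id: str = MODEL_ID):
    """Load tokenizer and 8-bit quantized model (downloads weights on first call)."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=quant_config,
        device_map="auto",  # place layers automatically across available devices
    )
    return tokenizer, model
```

Calling `load_model()` downloads the checkpoint and places it on available hardware; generation then proceeds with the standard `model.generate(...)` API.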
