lzw1008/Emollama-7b
Task: Text Generation · Model Size: 7B · Quantization: FP8 · Context Length: 4k · Published: Jan 21, 2024 · License: MIT · Architecture: Transformer · Concurrency Cost: 1

Emollama-7b, developed by lzw1008 as part of the EmoLLMs project, is a 7 billion parameter instruction-following large language model fine-tuned from Meta's LLaMA2-7B. It specializes in comprehensive affective analysis, including sentiment polarity, categorical emotions, sentiment strength, and emotion intensity. The model is trained on the full AAID instruction-tuning data and supports a 4,096-token context length.
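The paragraph above describes an instruction-following causal language model, so usage follows the standard Hugging Face `transformers` generation pattern. The sketch below is illustrative only: the repo id `lzw1008/Emollama-7b` and the prompt wording are assumptions, and the exact AAID instruction template should be taken from the EmoLLMs project itself.

```python
# Sketch: querying Emollama-7b for sentiment analysis via Hugging Face
# transformers. The prompt wording below is a hypothetical example, not
# the exact AAID template used during fine-tuning.

def build_prompt(text: str) -> str:
    """Wrap input text in a simple affective-analysis instruction."""
    return (
        "Human:\n"
        "Task: Classify the sentiment of the following text as "
        "positive, negative, or neutral.\n"
        f"Text: {text}\n"
        "Assistant:\n"
    )

def analyze(text: str) -> str:
    # Heavy imports kept inside the function so the prompt helper can be
    # used without downloading the 7B checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("lzw1008/Emollama-7b")
    model = AutoModelForCausalLM.from_pretrained("lzw1008/Emollama-7b")

    inputs = tokenizer(build_prompt(text), return_tensors="pt")
    # Short generation budget: the model emits a brief label/analysis.
    outputs = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Only print the prompt here; analyze() requires the full model weights.
    print(build_prompt("I love this movie!"))
```

Keeping the prompt builder separate from the model call makes it easy to swap in the project's official instruction template once confirmed.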
