GELI: Adapted LLM for Social Conversations
dondongwonlee/GELI is a 7-billion-parameter language model fine-tuned from Meta's Llama-2-7b-chat-hf. It was developed as part of the research presented at EMNLP 2024 (Oral) in the paper "Global Reward to Local Rewards: Multimodal-Guided Decomposition for Improving Dialogue Agents." The model targets social conversation: a global, session-level reward is decomposed into local, turn-level rewards using multimodal guidance, specifically facial expressions.
Key Capabilities
- Social Conversation Enhancement: Adapted to improve dialogue agents in social conversational contexts.
- Multimodal Integration: Trained with multimodal-guided reward decomposition, so its conversational behavior is shaped by non-textual cues such as facial expressions, even though the released model itself takes text input.
- Research-Oriented: Developed for academic and scientific research, particularly in the domain of human-computer interaction and advanced dialogue systems.
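Because GELI is fine-tuned from Llama-2-7b-chat-hf, it is reasonable to assume prompts follow the standard Llama-2 chat template, though the model card does not state this explicitly. A minimal sketch of building such a prompt (the helper name and system message are illustrative, not part of the release):

```python
# Builds a single-turn prompt in the documented Llama-2 chat format.
# GELI is assumed to inherit this convention from its base model,
# Llama-2-7b-chat-hf; this is an assumption, not confirmed by the card.

def build_llama2_chat_prompt(system: str, user: str) -> str:
    """Wrap a system message and one user turn in Llama-2 chat tags."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_chat_prompt(
    "You are a warm, engaging conversational partner.",
    "Hi! How was your weekend?",
)
```

The resulting string can then be passed to a standard `transformers` text-generation pipeline loaded from the `dondongwonlee/GELI` repository.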
Training and Licensing
The model was trained on the CANDOR dataset provided by BetterUp, Inc., and is subject to that dataset's licensing terms: usage is restricted to research purposes only, and redistribution of the dataset or identification of individuals in it is prohibited. As an adaptation of Llama 2, the model also falls under Meta's LLAMA 2 Community License Agreement and retains the original license terms and intended-use guidelines for Llama 2.