lzw1008/ConspEmoLLM-7b

Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Mar 29, 2024 · License: MIT · Architecture: Transformer (open weights)

ConspEmoLLM-7b is a 7 billion parameter large language model developed by lzw1008, specifically fine-tuned for conspiracy theory detection. This model integrates emotion-based analysis to enhance its ability to identify and classify conspiracy-related content. It is designed for applications requiring nuanced understanding of text to discern conspiratorial narratives.


Overview

ConspEmoLLM-7b is a 7 billion parameter large language model developed by lzw1008, designed for the specialized task of conspiracy theory detection. This model distinguishes itself by incorporating an emotion-based approach to analyze text, which is crucial for identifying the underlying sentiments and rhetorical patterns often present in conspiratorial content. Its development is detailed in the associated research paper, "ConspEmoLLM: Conspiracy Theory Detection Using an Emotion-Based Large Language Model" (arXiv:2403.06765).
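Since the weights are published under the `lzw1008/ConspEmoLLM-7b` identifier, the model can be queried like any other causal language model via the Hugging Face `transformers` API. The sketch below is illustrative only: the instruction template in `build_prompt` is an assumption for demonstration, not the exact template used in the paper's fine-tuning, and the actual expected prompt format should be checked against the authors' repository.

```python
def build_prompt(text: str) -> str:
    # Hypothetical instruction template; the template actually used
    # during ConspEmoLLM's fine-tuning may differ.
    return (
        "Classify whether the following text promotes a conspiracy theory. "
        "Answer 'conspiracy' or 'non-conspiracy'.\n\n"
        f"Text: {text}\nAnswer:"
    )

def classify(text: str, model_id: str = "lzw1008/ConspEmoLLM-7b") -> str:
    # Lazy import so prompt construction works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer(build_prompt(text), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=8)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    ).strip()
```

Note that greedy decoding with a small `max_new_tokens` budget is a reasonable default for a short classification label, but a production setup would typically constrain or parse the output more strictly.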

Key Capabilities

  • Conspiracy Theory Detection: Specialized fine-tuning enables the model to identify and classify text as conspiratorial.
  • Emotion-Based Analysis: Leverages emotional cues within text to improve detection accuracy.
  • Text Understanding: Processes and interprets complex narratives to discern subtle indicators of conspiracy theories.

Good For

  • Content Moderation: Assisting platforms in identifying and flagging potentially harmful conspiratorial content.
  • Research: Supporting academic and journalistic efforts to study the spread and characteristics of conspiracy theories.
  • Information Analysis: Providing tools for analyzing large volumes of text for specific narrative patterns related to conspiracies.