Detecting and Understanding Harmful Memes: A Survey
IJCAI International Joint Conference on Artificial Intelligence
The automatic identification of harmful content online is of major concern for social media platforms, policymakers, and society. Researchers have studied textual, visual, and audio content, but typically in isolation. Yet, harmful content often combines multiple modalities, as in the case of memes. With this in mind, here we offer a comprehensive survey with a focus on harmful memes. Based on a systematic analysis of recent literature, we first propose a new typology of harmful memes, and then we highlight and summarize the relevant state of the art. One interesting finding is that many types of harmful memes remain largely unstudied, e.g., those featuring self-harm and extremism, partly due to the lack of suitable datasets. We further find that existing datasets mostly capture multi-class scenarios, which do not cover the full affective spectrum that memes can represent. Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual, blending different cultures. We conclude by highlighting several challenges related to multimodal semiotics, technological constraints, and non-trivial social engagement, and we present several open-ended aspects such as delineating online harm and empirically examining related frameworks and assistive interventions, which we believe will motivate and drive future research. © 2022 International Joint Conferences on Artificial Intelligence. All rights reserved.
Keywords: Artificial intelligence, Automation, Blending
S. Sharma et al., "Detecting and Understanding Harmful Memes: A Survey," in Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI 2022), Vienna, Austria, July 2022, pp. 5597-5606, doi: 10.24963/ijcai.2022/781.