Detecting and Understanding Harmful Memes: A Survey

Document Type

Conference Proceeding

Publication Title

IJCAI International Joint Conference on Artificial Intelligence

Abstract

The automatic identification of harmful content online is of major concern for social media platforms, policymakers, and society. Researchers have studied textual, visual, and audio content, but typically in isolation. Yet, harmful content often combines multiple modalities, as in the case of memes. With this in mind, here we offer a comprehensive survey with a focus on harmful memes. Based on a systematic analysis of recent literature, we first propose a new typology of harmful memes, and then we highlight and summarize the relevant state of the art. One interesting finding is that many types of harmful memes remain understudied, e.g., those featuring self-harm and extremism, partly due to the lack of suitable datasets. We further find that existing datasets mostly capture multi-class scenarios, which do not cover the full affective spectrum that memes can represent. Another observation is that memes can propagate globally through repackaging in different languages, and that they can also be multilingual, blending different cultures. We conclude by highlighting several challenges related to multimodal semiotics, technological constraints, and non-trivial social engagement, and we outline several open-ended directions, such as delineating online harm and empirically examining related frameworks and assistive interventions, which we believe will motivate and drive future research. © 2022 International Joint Conferences on Artificial Intelligence. All rights reserved.

First Page

5597

Last Page

5606

DOI

10.24963/ijcai.2022/781

Publication Date

July 2022

Keywords

Artificial intelligence, Automation, Blending

Comments

IR deposit conditions: not described
