What Do You MEME? Generating Explanations for Visual Semantic Role Labelling in Memes

Document Type

Conference Proceeding

Publication Title

Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023


Memes are a powerful means of communication on social media. Their effortless amalgamation of viral visuals and compelling messages can have far-reaching implications with proper marketing. Previous research on memes has primarily focused on characterizing their affective spectrum and detecting whether a meme's message insinuates intended harm, such as hate, offense, or racism. However, memes often use abstraction, which can be elusive. Here, we introduce a novel task, EXCLAIM: generating explanations for visual semantic role labelling in memes. To this end, we curate ExHVV, a novel dataset that offers natural language explanations of connotative roles for three types of entities: heroes, villains, and victims, encompassing 4,680 entities present in 3K memes. We also benchmark ExHVV with several strong unimodal and multimodal baselines. Moreover, we propose LUMEN, a novel multimodal, multi-task learning framework that addresses EXCLAIM by jointly learning to predict the correct semantic roles and to generate suitable natural language explanations. LUMEN distinctly outperforms the best baseline across 18 standard natural language generation evaluation metrics. Our systematic evaluation and analyses demonstrate that the characteristic multimodal cues required for adjudicating semantic roles are also helpful for generating suitable explanations.

First Page

Last Page

Publication Date

Keywords

ML: Multimodal Learning, CV: Language and Vision, CV: Multi-modal Vision, APP: Humanities & Computational Social Science, ML: Multi-Class/Multi-Label Learning & Extreme Classification, ML: Transfer, Domain Adaptation, Multi-Task Learning, PEAI: Societal Impact of AI, SNLP: Generation


Copyright by AAAI

Institutional repository conditions are described on the AAAI Open Journal System About page.

Archived thanks to AAAI

Uploaded 28 November 2023
