What Do You MEME? Generating Explanations for Visual Semantic Role Labelling in Memes
Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Memes are powerful means for effective communication on social media. Their effortless amalgamation of viral visuals and compelling messages can have far-reaching implications with proper marketing. Previous research on memes has primarily focused on characterizing their affective spectrum and on detecting whether the meme's message insinuates any intended harm, such as hate, offense, or racism. However, memes often use abstraction, which can be elusive. Here, we introduce a novel task, EXCLAIM: generating explanations for visual semantic role labelling in memes. To this end, we curate ExHVV, a novel dataset that offers natural language explanations of connotative roles for three types of entities (heroes, villains, and victims), encompassing 4,680 entities present in 3K memes. We also benchmark ExHVV with several strong unimodal and multimodal baselines. Moreover, we propose LUMEN, a novel multimodal, multi-task learning framework that addresses EXCLAIM by jointly learning to predict the correct semantic roles and to generate corresponding natural language explanations. LUMEN distinctly outperforms the best baseline across 18 standard natural language generation evaluation metrics. Our systematic evaluation and analyses demonstrate that the characteristic multimodal cues required for adjudicating semantic roles are also helpful for generating suitable explanations.
ML: Multimodal Learning, CV: Language and Vision, CV: Multi-modal Vision, APP: Humanities & Computational Social Science, ML: Multi-Class/Multi-Label Learning & Extreme Classification, ML: Transfer, Domain Adaptation, Multi-Task Learning, PEAI: Societal Impact of AI, SNLP: Generation
Sharma, S., Agarwal, S., Suresh, T., Nakov, P., Akhtar, M. S. and Chakraborty, T. (2023) "What Do You MEME? Generating Explanations for Visual Semantic Role Labelling in Memes", Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), pp. 9763-9771. doi: 10.1609/aaai.v37i8.26166.