Crisis-DIAS: Towards Multimodal Damage Analysis - Deployment, Challenges, and Assessment

Published in The Thirty-Fourth AAAI Conference on Artificial Intelligence: AI for Social Impact (AAAI 2020), 2020

Recommended citation: Mansi Agarwal*, Maitree Leekha*, Ramit Sawhney, and Rajiv Ratn Shah. "Crisis-DIAS: Towards Multimodal Damage Analysis - Deployment, Challenges, and Assessment." The Thirty-Fourth AAAI Conference on Artificial Intelligence: AI for Social Impact (AAAI 2020), 2020.

[PDF] [DOI]

Abstract

In times of disaster, information shared on social media can support several humanitarian tasks, since disseminating messages on these platforms is quick and widely accessible. Disaster damage assessment is inherently multi-modal, yet most existing work on damage identification has focused on building generic classification models that rely exclusively on either the text or the images of social media posts. Despite their empirical success, these efforts ignore the multi-modal information manifested in social media data. When information from multiple modalities is presented together, it typically offers complementary insights about the application domain and facilitates better learning performance. In this work, we present Crisis-DIAS, a multi-modal sequential damage identification and severity detection system. We aim to support disaster management and relief planning by analyzing and exploiting the impact of linguistic cues on a unimodal visual system. Through extensive qualitative, quantitative, and theoretical analysis on a real-world multi-modal social media dataset, we show that the Crisis-DIAS framework is superior to state-of-the-art damage assessment models in terms of bias, responsiveness, computational efficiency, and assessment performance.
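To make the two-stage, multi-modal idea concrete, the sketch below shows a generic late-fusion classifier in PyTorch, assuming precomputed text and image embeddings for each post. The module names, feature dimensions, and fusion scheme are illustrative assumptions only, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class LateFusionDamageModel(nn.Module):
    """Illustrative two-stage classifier: damage detection, then severity.
    A hypothetical sketch, not the Crisis-DIAS architecture itself."""

    def __init__(self, text_dim=300, image_dim=2048, hidden_dim=256, num_severities=3):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # Stage 1: binary damage / no-damage head over the fused features.
        self.damage_head = nn.Linear(2 * hidden_dim, 2)
        # Stage 2: severity head (e.g., mild / moderate / severe).
        self.severity_head = nn.Linear(2 * hidden_dim, num_severities)

    def forward(self, text_feats, image_feats):
        # Late fusion: concatenate the projected modality features.
        fused = torch.cat(
            [torch.relu(self.text_proj(text_feats)),
             torch.relu(self.image_proj(image_feats))],
            dim=-1,
        )
        return self.damage_head(fused), self.severity_head(fused)

# Toy batch of 4 posts with random stand-in embeddings.
model = LateFusionDamageModel()
damage_logits, severity_logits = model(torch.randn(4, 300), torch.randn(4, 2048))
print(damage_logits.shape, severity_logits.shape)  # torch.Size([4, 2]) torch.Size([4, 3])
```

At inference time, the severity head would only be consulted for posts that the first stage flags as damage-related, mirroring the sequential identification-then-severity design the abstract describes.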