The Consequences of Natural Disasters

Abstract

Natural disasters are unavoidable in the sense that we have little or no control over their occurrence or over the outcomes they cause in human life. They frequently inflict severe material damage as well as loss of life. Over the years, many practices have been researched and applied to predict their occurrence well in advance, all with the same motive: to reduce the resulting loss and damage as much as possible. Manual inspection is inefficient, as it takes a considerable amount of resources and time to recognize different disastrous events. We therefore propose a deep convolutional neural network method for the recognition of such events, including the localization and detailed assessment of damaged buildings as well as fire and flood detection, which is of utmost importance for guiding response operations and recovery tasks. Various convolutional neural network techniques have previously been applied to recognize disastrous events, especially building damage caused by earthquakes or by negligence in infrastructure. Our system is trained on a large sample of images, which increases the quality and accuracy of the classification results.

Introduction

Disasters arise from the combination of hazards, conditions of vulnerability, and insufficient capacity or measures to reduce the negative consequences of risk. A hazard becomes a disaster when it coincides with a vulnerable situation, when societies or communities are unable to cope with it using their own resources and capacities [1]. In 2010, 373 disasters led to the deaths of 226,000 people and affected 207 million more. Over the decade 2000-2010, roughly 400 disasters per year accounted for 98,000 deaths and 226 million people affected annually. In total, 1,077,683 people lost their lives and 2.4 billion were affected by disasters during the decade, as shown in Figure 1 [1].

Figure 1: Trend of reported Disasters, 1975-2010 

From 2000 to 2010, economic damage as a result of disasters came to about US$1 trillion; in 2010 alone, the total estimated damage was US$109 billion. Damage in the past two decades is significantly greater than in earlier decades, which could reflect greater exposure, better reporting, or both. Rich countries suffer greater absolute damage because the value of their infrastructure is higher, as shown in Figure 2 [1].

Figure 2: Global economic damages from hazards, 1970-2010 [1]

A number of techniques have been applied in the past for the detection and recognition of building damage caused by natural or man-made disasters. However, these techniques require offline post-processing and analysis to extract useful information. To overcome this shortcoming, a deep convolutional network is used to achieve real-time disaster detection, which can be effectively applied in early emergency response and disaster management.

Literature Review

Various methods have been reported for the automatic image classification of building damage. These aim to relate features extracted from the imagery to evidence of damage. Such methods are usually closely tied to the platform used for acquisition, exploiting its intrinsic characteristics such as viewing angle and resolution, among others [2]. One study proposed a technique for the disaster image retrieval task of the Multimedia Satellite challenge. For visual information, convolutional neural network features extracted with two different models pre-trained on ImageNet are used. A fusion technique jointly exploits the visual information and the extra information available in the form of meta-data to retrieve disaster images from social media. The average precision of three different runs, with visual information only, meta-data only, and the combination of meta-data and visual information, was 95.73%, 18.23%, and 92.55%, respectively [3].

Another technique utilized a rotary-wing octocopter micro air vehicle (MAV) equipped with a high-resolution camera to scan buildings for inspection and monitoring purposes. The MAV carried a microcontroller-based control system and various sensors for navigation and flight stabilization. Pictures were taken at high speed and frequency and stored onboard before being downloaded once the MAV completed a mission. The pictures were then stitched together to obtain a full 2D image at a resolution that allows damage and cracking to be observed in the millimeter range. In a follow-on step, image processing software was developed that filters out cracking patterns specifically, so that they can be analyzed from a statistical pattern recognition point of view in a future step [4].

Robust detection of buildings is an important problem in automated aerial image interpretation: automatic detection of buildings enables the creation of maps, the detection of changes, and the monitoring of urbanization. Because the appearance of a scene is complex and uncontrolled, intelligent fusion of several methods gives better results. One author presented a novel approach for building detection using multiple cues, additionally employing edge and shadow information [5]. Another proposed technique utilized oblique aerial images, which offer a view of both building roofs and façades and have therefore been recognized as a potential source for detecting severe building damage caused by destructive disaster events such as earthquakes.

These images therefore represent an important source of information for first responders and other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new, unseen sites must be processed, hampering their practical use. Reasons for this inadequacy include image and scene characteristics, though the most prominent relates to the image features used for training the classifier. Recently, features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state of the art in many domains, including remote sensing [6]. Building on this, three multiresolution feature fusion techniques were proposed for detecting building damage.

The dataset used in the three multiresolution approaches was formed by three sets of images, and the technique that preserves feature information from the intermediate layers at a later stage of the network was found to perform better than the other two [7].

Methodology

The proposed CNN architecture for the classification of different disastrous events consists of two basic modules:

  1. a context module, followed by
  2. a resolution-specific module.

Figure 3: Basic Building Blocks of CNN Architecture

Both modules are built by stacking basic convolutional sets, each composed of a convolution, batch normalization and ReLU (CBR), as shown in Figure 4. Batch normalization is used to speed up the training of a convolutional neural network and to reduce its sensitivity to network initialization.
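As a sketch, one such convolution-batch-normalization-ReLU (CBR) set could look like this in PyTorch (the framework is an assumption; the paper does not name one):

```python
import torch
import torch.nn as nn

def cbr(in_ch, out_ch, dilation=1):
    """One CBR set: 3x3 convolution, batch normalization, ReLU.
    Padding equals the dilation so the spatial size is preserved."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3,
                  padding=dilation, dilation=dilation, bias=False),
        nn.BatchNorm2d(out_ch),  # speeds up training, reduces sensitivity to init
        nn.ReLU(inplace=True),
    )
```

Because padding tracks the dilation factor, the same block can be reused unchanged throughout both modules while keeping the feature-map resolution constant.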

Figure 4: Basic group of convolutions used to build the context and resolution specific module

The context module is built by stacking 19 CBRs, shown in Figure 5, with an increasing number of filters and an increasing dilation factor. In our tests, fewer CBRs made the network weaker, while deeper networks gave no improvement and slowed the network at runtime (increasing the risk of overfitting). A growing number of filters is common in CNN approaches, following the general assumption that more filters are needed to represent more complex features [9]. The increasing dilation factor in the context module is aimed at gradually capturing feature representations over a larger context area.
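A minimal sketch of the context module follows, assuming PyTorch. The 19-block depth is stated in the text; the specific filter and dilation schedule below, and the residual skip when channel counts match, are illustrative assumptions (only five blocks are shown for brevity):

```python
import torch
import torch.nn as nn

def cbr(in_ch, out_ch, dilation=1):
    """CBR set: 3x3 convolution, batch normalization, ReLU ('same' padding)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3,
                  padding=dilation, dilation=dilation, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class ContextModule(nn.Module):
    """Stack of CBR blocks with growing filter counts and dilation factors,
    gradually widening the receptive field over the context area.
    Schedule below is an illustrative assumption, not the paper's exact one."""
    def __init__(self, in_ch=3):
        super().__init__()
        schedule = [(32, 1), (32, 1), (64, 2), (64, 2), (128, 4)]
        self.blocks = nn.ModuleList()
        ch = in_ch
        for out_ch, dil in schedule:
            self.blocks.append(cbr(ch, out_ch, dilation=dil))
            ch = out_ch
        self.out_channels = ch

    def forward(self, x):
        for block in self.blocks:
            y = block(x)
            # residual skip only when input/output shapes match (assumption)
            x = x + y if y.shape == x.shape else y
        return x
```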

Figure 5: Structure of the Context module

The increase in the dilation factor can create artifacts in the resulting feature maps, due to the gaps introduced by the dilated kernel. To attenuate this drawback, the dilation increase in the context module is compensated in the resolution-specific module, shown in Figure 6, by a gradual reduction of the dilation value and by the removal of the residual connections from the basic CBR blocks. This also allows the network to re-capture the more local features that might be lost due to the increasing dilations in the context module.
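The resolution-specific module can be sketched as a plain CBR stack (no residual skips) whose dilation decreases step by step; the exact 4-2-1 schedule and channel count below are assumptions for illustration, again in PyTorch:

```python
import torch
import torch.nn as nn

def cbr(in_ch, out_ch, dilation=1):
    """CBR set: 3x3 convolution, batch normalization, ReLU ('same' padding)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3,
                  padding=dilation, dilation=dilation, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class ResolutionModule(nn.Module):
    """CBR stack with gradually decreasing dilation and no residual
    connections, compensating the context module's dilation increase
    and re-capturing local detail. The 4-2-1 schedule is an assumption."""
    def __init__(self, in_ch=128):
        super().__init__()
        layers = [cbr(in_ch, in_ch, dilation=d) for d in (4, 2, 1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)
```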

Figure 6: Structure of the Resolution-Specific module

For classification, global average pooling maps the feature maps to the number of classes, and the sigmoid function is used for activation, as this is a binary classification problem.

Datasets

The dataset consists of a set of generic satellite image samples. The satellite images cover five different geographical locations in Italy, Ecuador and Haiti (Table 1), and were collected with WorldView-3 and GeoEye-1. These data are pan-sharpened and have a variable resolution between 0.4 and 0.6 m. The generic satellite image samples are taken from a freely available benchmark dataset, NWPU-RESISC45, which contains 45 classes with 700 satellite image samples per class. From these, fourteen classes were selected and divided into two broader classes, built and non-built. Rather than using all 31,500 samples, only the fourteen selected classes (9,800 samples) are considered, to reduce the computational cost of the approach.
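The classification head described above (global average pooling followed by a sigmoid) could be sketched as follows, assuming PyTorch and a 1x1 convolution to map features to class scores (the 1x1 convolution is an assumption about how the mapping to the number of classes is done):

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Maps feature maps to class scores, then applies global average
    pooling and a sigmoid. One output unit suffices for the binary
    built / non-built problem."""
    def __init__(self, in_ch=128, num_classes=1):
        super().__init__()
        self.to_classes = nn.Conv2d(in_ch, num_classes, kernel_size=1)

    def forward(self, x):
        x = self.to_classes(x)    # (N, num_classes, H, W)
        x = x.mean(dim=(2, 3))    # global average pooling -> (N, num_classes)
        return torch.sigmoid(x)   # probabilities in (0, 1)
```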

Experiment and Results

The presented results indicate an improvement in the satellite image classification of building damages due to the use of training samples from different spatial resolutions. For both sets of experiments, the accuracy, recall, and precision were calculated for the validation image datasets described above, using the following equations:

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)
Recall = TP / (TP + FN)    (2)
Precision = TP / (TP + FP)    (3)

where, in Equations (1)-(3), TP are the true positives, TN the true negatives, FN the false negatives, and FP the false positives. The dataset is divided into two parts: 70% of the images are used for training and 30% for testing. The accuracy and the loss function are calculated while training the model.
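The three evaluation metrics can be computed directly from the confusion-matrix counts; a minimal sketch with an illustrative worked example (the counts 90/5/10/95 are made up, not the paper's results):

```python
def evaluation_metrics(tp, fp, fn, tn):
    """Accuracy, recall and precision from confusion-matrix counts.
    TN (true negatives) is required for accuracy in Eq. (1)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # Eq. (1)
    recall = tp / (tp + fn)                      # Eq. (2)
    precision = tp / (tp + fp)                   # Eq. (3)
    return accuracy, recall, precision

# Illustrative counts (not from the paper):
acc, rec, prec = evaluation_metrics(tp=90, fp=5, fn=10, tn=95)
# acc = 185/200 = 0.925, rec = 90/100 = 0.9, prec = 90/95
```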

Figure 8: Training and Testing

The training and testing accuracies are 98% and 94%, respectively.

Conclusion

This report assessed the use of generic satellite images with different resolutions within a CNN approach to perform satellite image classification of building damages. The combined use of different resolutions in training the CNN improved the accuracy of the satellite image classification of building damages, as depicted in the results. The successful use of satellite image samples should also be extended to other image classification problems with more classes, which will further increase the overall accuracy in complex scenarios.

References

  • "Economic Losses, Poverty and Disasters," UNISDR.
  • T. Balz and M. Liao, "Building-damage detection using post-seismic high-resolution SAR satellite data," International Journal of Remote Sensing, vol. 31, pp. 3369-3391, 2010.
  • K. He, X. Zhang, S. Ren and J. Sun, "Deep residual learning for image recognition," in IEEE Conference on Computer vision and pattern Recognition(CVPR), USA, 2016.
  • C. Eschmann, C. M. Kuo, C. H. Kuo and C. Boller, "Unmanned Aircraft Systems for Remote Building Inspection and Monitoring," pp. 1-8, 2012.
  • B. Sırmaçek and C. Ünsalan, "Building Detection from Aerial Images using Invariant Color Features and Shadow Information," IEEE, pp. 1-5, 2008.
  •  A. Vetrivel, M. Gerke, N. Kerle, F. Nex and G. Vosselman, "Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 140, pp. 45-59, 2018.
  • D. Duarte, F. Nex, N. Kerle and G. Vosselman, "Satellite Image Classification of Building Damages Using Airborne and Satellite Image Samples in a Deep Learning Approach," ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. IV, pp. 89-96, 2018.
  • R. Hamaguchi, A. Fujita, K. Nemoto, T. Imaizumi and S. Hikosaka, "Effective Use of Dilated Convolutions for Segmenting Small Object Instances in Remote Sensing Imagery," 2017.
  • K. He, X. Zhang, S. Ren and J. Sun, "Deep Residual Learning for Image Recognition," 2015.
  • S. Ioffe and C. Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," 2015.