INGENIOUS situational awareness algorithms for UAVs
UAVs (Unmanned Aerial Vehicles) have proven to be a valuable instrument for inspecting indoor spaces, such as damaged buildings. The images collected by these platforms are useful for understanding the environment, detecting relevant details such as the presence of possible hazards, and localizing trapped people before First Responders (FRs) enter. So far, this useful information has had to be extracted by human operators from long image and video sequences, requiring additional effort from the rescue team in the field. In this regard, a procedure to automatically extract the useful information would make the use of UAVs more effective, giving tangible support to FRs' prompt action.
In the INGENIOUS Project, ITC is committed to designing situational awareness algorithms for UAVs using cutting-edge artificial intelligence (AI) algorithms. The images captured by camera sensors on the UAV are processed by deep learning algorithms to produce a high-level semantic map and autonomously interpret the scene, selecting useful information for First Responders. A semantic map (Figure 1) is a visual representation of the scene in which each pixel is classified into one of a pre-defined set of object classes. From a semantic map, we can easily extract information on the presence of certain objects and allow First Responders to focus on the specific classes they are interested in.
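To illustrate how a semantic map can be queried, the sketch below uses a toy array of per-pixel class IDs, as a segmentation network would produce by taking the argmax over class scores. The class names and IDs here are hypothetical, not the project's actual label set:

```python
import numpy as np

# Hypothetical class IDs for illustration only; the project's label set may differ.
CLASSES = {0: "background", 1: "wall", 2: "door", 3: "person", 4: "debris"}

# A toy 4x6 semantic map: each pixel holds the ID of its predicted class.
semantic_map = np.array([
    [1, 1, 1, 2, 2, 1],
    [1, 0, 0, 2, 2, 1],
    [1, 0, 3, 3, 0, 1],
    [4, 4, 3, 3, 4, 4],
])

def class_mask(sem_map, class_id):
    """Binary mask of all pixels assigned to one class of interest."""
    return sem_map == class_id

def present_classes(sem_map):
    """Names of all classes detected anywhere in the map."""
    return sorted(CLASSES[c] for c in np.unique(sem_map))

# First Responders could filter the map down to a single class they care about,
# e.g. to check whether a person is visible in the scene.
person_pixels = int(class_mask(semantic_map, 3).sum())
print(present_classes(semantic_map))
print(person_pixels)  # 4 pixels labelled "person"
```

The same mask operation scales to real network output: the per-class masks can be overlaid on the original frame to highlight only the objects selected by the operator.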
Emergency scenarios are usually completely dark indoor environments, as the electricity supply has typically failed (Figure 2). Although the UAV is equipped with strip lights to illuminate its surroundings, the illumination is not homogeneous and prevents an optimal interpretation of the scene. Many semantic segmentation algorithms have already been presented in the scientific literature, but none of them works in dark environments, where shadows hinder the deep learning algorithm from outputting reliable semantic predictions. To address this specific scenario, ITC is working on an innovative neural network architecture to mitigate the negative influence of light changes and deliver robust low-light indoor scene understanding.
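The details of ITC's architecture are not given here, but two classic image-level baselines for mitigating low light are gamma correction and histogram equalization, often applied before segmentation. The sketch below demonstrates both on a synthetic under-exposed frame; it is a generic illustration, not the project's actual method:

```python
import numpy as np

def gamma_correct(img, gamma=0.4):
    """Brighten a dark image: values in [0, 1] raised to gamma < 1."""
    return np.clip(img, 0.0, 1.0) ** gamma

def equalize(img, bins=256):
    """Global histogram equalization: spread intensities over the full range."""
    hist, edges = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalize the cumulative distribution to [0, 1]
    # Map each pixel through the CDF so the output histogram is roughly flat.
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

# A synthetic under-exposed grayscale frame (intensities clustered near 0).
rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.15, size=(64, 64))

bright = gamma_correct(dark)
flat = equalize(dark)
print(dark.mean(), bright.mean(), flat.mean())
```

Such global corrections help, but cannot remove the hard shadows cast by on-board strip lights, which is why a learned, architecture-level solution is being pursued.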
To date, the work has focused on "intact" environments using both real and synthetic data, as this made it easier to generate the datasets needed to train the neural network. The tests have shown very promising results (Figure 3).
The work will now be extended to destroyed environments using real datasets provided by the project stakeholders, to better address the goals of the Project. Further work will also include the detection of victims amid the debris, making the automated interpretation of the scene even more meaningful.