LIT#7 took place on October 1, 2020, during which the Multi-source Information Fusion Engine and Expert Reasoning and Data Exploitation (responsible partners: CERTH, CS, EXUS) components of INGENIOUS were tested. The specific use cases that were demonstrated were:

  1. Testing of internal data flow communication between Fusion Engine components: Data Generator and Kafka
  2. Testing of internal data flow communication between Fusion Engine components: Kafka – Persistence Service – DB
  3. Testing of the alerting mechanism on FR health status: running multiple scenarios to demonstrate and monitor alert probability
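The first two use cases exercise the internal pipeline from the Data Generator, through Kafka, to the Persistence Service and the DB. As a rough sketch of that flow (an in-memory queue stands in for the Kafka broker, and a plain dict stands in for the DB; all names and message fields here are illustrative, not the project's actual API):

```python
import json
from collections import defaultdict, deque

class InMemoryBroker:
    """Stand-in for the Kafka broker (illustrative only)."""
    def __init__(self):
        self.topics = defaultdict(deque)
    def produce(self, topic, message):
        self.topics[topic].append(json.dumps(message))
    def consume(self, topic):
        while self.topics[topic]:
            yield json.loads(self.topics[topic].popleft())

def data_generator(broker, readings):
    # Data Generator: publishes simulated sensor readings to a topic.
    for r in readings:
        broker.produce("fr-vitals", r)

def persistence_service(broker, db):
    # Persistence Service: drains the topic and stores records in the DB.
    for msg in broker.consume("fr-vitals"):
        db.setdefault(msg["fr_id"], []).append(msg)

broker = InMemoryBroker()
db = {}
data_generator(broker, [{"fr_id": "FR-01", "heart_rate": 92},
                        {"fr_id": "FR-01", "heart_rate": 118}])
persistence_service(broker, db)
print(len(db["FR-01"]))  # 2 records persisted
```

The test in the LIT verifies exactly this kind of end-to-end property: every message produced upstream is retrievable from the store downstream.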
Figure 1:  Testing of internal data flow communication between Fusion Engine components: Kafka – Persistence Service – DB
Figure 2: Ontologies First Responders & Operational Goals

During a separate session, the Worksite Operation Application (WOA) was demonstrated (under Task 4.4: Applications for enhancing the operational capacity of practitioners of the INGENIOUS project). The requirements tested included information retrieval and display based on FR selection, a catalogue of the equipment currently available or in use in the mission, and a victim identification overview collecting statistics from the face recognition and triage apps.
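The three tested requirements map naturally onto three views over the mission data. A minimal sketch of that structure (the dataclasses, field names, and aggregation logic below are all hypothetical, chosen only to illustrate the three views):

```python
from dataclasses import dataclass, field

@dataclass
class Equipment:
    name: str
    in_use: bool = False

@dataclass
class FirstResponder:
    fr_id: str
    status: str = "active"
    equipment: list = field(default_factory=list)

def woa_overview(responders, victim_stats):
    """Build the three WOA views: per-FR info, an equipment catalogue,
    and a victim-identification summary (structure is hypothetical)."""
    catalogue = [e for fr in responders for e in fr.equipment]
    return {
        "responders": {fr.fr_id: fr.status for fr in responders},
        "available": [e.name for e in catalogue if not e.in_use],
        "victims_identified": victim_stats.get("face_recognition", 0)
                              + victim_stats.get("triage", 0),
    }

frs = [FirstResponder("FR-01", equipment=[Equipment("thermal camera"),
                                          Equipment("gas sensor", in_use=True)])]
view = woa_overview(frs, {"face_recognition": 2, "triage": 1})
print(view["victims_identified"], len(view["available"]))  # 3 1
```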

Figure 3: Implementation Status of the WOA during LIT7

2nd Round of LITs


LIT#16 took place in May 2021, in the framework of the 2nd round of the INGENIOUS LITs. EXUS AI Labs, in collaboration with CERTH, successfully demonstrated the multi-source information fusion engine, the expert reasoning system, and the data exploitation and application for enhancing operational capacity, and presented the progress made on these components since LIT#7. The Multi-Source Information Fusion Engine and Expert Reasoning System were tested for near real-time data processing and data quality (across different dimensions), their ability to produce alerts from heterogeneous sensors (e.g., dehydration) and raise alarms for First Responders (FRs) in danger, communication to and from First Responders, and attributes related to scalability and post-processing capabilities. The components were demonstrated across a number of use cases.

Figure 1: Colour coded areas based on severity of hazard (gas sensor, temperature)

Two intelligence services were demonstrated: the smart alerting system, which had been tested before, and the map zone classification, which was shown for the first time.

The smart alerting system uses deep learning algorithms (LSTMs) to monitor the vitals of FRs; when potential risks are identified, alerts are raised so that operators can take preventive actions. The component was demonstrated using simulated data sent through Kafka, with alerts forwarded through the Fusion Engine to the COP.
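The shape of that pipeline can be sketched as follows. For brevity, a sliding-window mean stands in for the LSTM risk model, and the thresholds, field names, and alert format are all illustrative assumptions, not the project's actual values:

```python
from collections import deque

def vitals_alerter(stream, hr_threshold=110, window=5):
    """Sliding-window stand-in for the LSTM risk model (illustrative):
    raise an alert when the mean heart rate over the last `window`
    readings exceeds hr_threshold."""
    buf = deque(maxlen=window)
    alerts = []
    for reading in stream:
        buf.append(reading["heart_rate"])
        if len(buf) == window and sum(buf) / window > hr_threshold:
            alerts.append({"fr_id": reading["fr_id"],
                           "type": "HEALTH_RISK",
                           "mean_hr": sum(buf) / window})
    return alerts

# Simulated vitals for one FR, trending upward.
stream = [{"fr_id": "FR-01", "heart_rate": hr}
          for hr in [95, 100, 108, 115, 125, 130]]
print(len(vitals_alerter(stream)))  # 1 alert raised
```

In the demonstrated system, the input stream arrives over Kafka and the resulting alert objects are routed through the Fusion Engine to the COP rather than returned locally.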

The other intelligence service shown was the map zone classification, based on data from the available sensors. Zones across the worksite are rated based on sensor readings, and their classification (in terms of danger for the FRs) is visualised through different colouring of the zones on the map. The system took as input the coordinates of each FR carrying the sensors at any given time, the temperature reading from the boots, and the gas sensor reading. Dangerous gas and temperature zones were depicted as red, orange, or yellow based on severity, and a clustering algorithm was used to build affected areas from neighbouring sensors.
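A rough sketch of the two steps, rating cells and grouping neighbours into affected areas, might look like this (the severity thresholds are invented for illustration, and a simple flood fill over a grid stands in for whatever clustering algorithm the project actually uses):

```python
def classify_zone(temp_c, gas_ppm):
    """Map raw readings to a severity colour (thresholds illustrative)."""
    if temp_c > 60 or gas_ppm > 400:
        return "red"
    if temp_c > 45 or gas_ppm > 200:
        return "orange"
    if temp_c > 35 or gas_ppm > 100:
        return "yellow"
    return "green"

def cluster_cells(cells):
    """Group neighbouring non-green grid cells into affected areas
    (flood fill standing in for the clustering algorithm)."""
    hot = {c for c, sev in cells.items() if sev != "green"}
    areas, seen = [], set()
    for cell in hot:
        if cell in seen:
            continue
        area, stack = set(), [cell]
        while stack:
            x, y = stack.pop()
            if (x, y) in seen or (x, y) not in hot:
                continue
            seen.add((x, y))
            area.add((x, y))
            stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
        areas.append(area)
    return areas

cells = {(0, 0): classify_zone(70, 50),   # red (temperature)
         (0, 1): classify_zone(50, 50),   # orange (temperature)
         (5, 5): classify_zone(30, 350),  # orange (gas)
         (9, 9): classify_zone(20, 10)}   # green, excluded
print(len(cluster_cells(cells)))  # 2 separate affected areas
```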

Figure 2 Expert Reasoning Workflow

During LIT#16, CERTH demonstrated the Expert Reasoning Component and its integration with the Fusion Engine (FE). For this LIT the Expert Reasoning Component was enhanced with Knowledge Base enrichment, more sophisticated reasoning rules, and enhanced alerts. The component aims to support the decision-making process of the First Responders (FRs) involved in a disaster. More precisely, it is a back-end service that receives information from the Fusion Engine about resources (e.g. uniform, boots), measurements, and relations; saves them in the Knowledge Base as interconnected Knowledge Graphs to facilitate decision support and early-warning generation; and transfers the resulting alerts to the COP via the FE. It was tested (Figure 2) in the following use cases and produced alerts with different severities and urgencies:

  1. Heatstroke rule: detects FRs suffering from heatstroke (moderate or severe)
  2. Dehydration rule: detects FRs suffering from dehydration (moderate or severe)
  3. Complex rule: combines the vitals from two different resources, uniform and boots, into a more complex reasoning rule for detecting an FR in serious danger.
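The three rule types above can be sketched as simple predicates over incoming measurements. All thresholds, field names, and the alert format below are illustrative assumptions; the actual component evaluates rules over the Knowledge Graph, not plain dicts:

```python
def heatstroke_rule(vitals):
    """Heatstroke rule over uniform vitals (thresholds illustrative)."""
    if vitals["body_temp"] > 40.0:
        return {"type": "HEATSTROKE", "severity": "severe"}
    if vitals["body_temp"] > 38.5:
        return {"type": "HEATSTROKE", "severity": "moderate"}
    return None

def dehydration_rule(vitals):
    """Dehydration rule over uniform vitals (thresholds illustrative)."""
    if vitals["hydration_pct"] < 50:
        return {"type": "DEHYDRATION", "severity": "severe"}
    if vitals["hydration_pct"] < 70:
        return {"type": "DEHYDRATION", "severity": "moderate"}
    return None

def complex_rule(uniform, boots):
    """Complex rule: combine uniform and boots readings into a single
    'FR in serious danger' alert with an urgency attached."""
    if uniform["body_temp"] > 38.5 and boots["surface_temp"] > 55:
        return {"type": "SERIOUS_DANGER", "severity": "severe",
                "urgency": "immediate"}
    return None

uniform = {"body_temp": 39.2, "hydration_pct": 60}
boots = {"surface_temp": 60}
alerts = [a for a in (heatstroke_rule(uniform),
                      dehydration_rule(uniform),
                      complex_rule(uniform, boots)) if a]
print(len(alerts))  # 3 alerts, with differing severities and urgencies
```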
Figure 3 Worksite Operation App

Finally, the Worksite Operations application (Figure 3) was demonstrated in terms of its integration with the Fusion Engine, visualisation of FR information (location, status), visualisation of the availability and allocation of equipment, and, lastly, information about the weather and the terrain.

After the LIT, feedback was collected from end users, and the different components were evaluated and validated against the set of requirements. Further improvements were discussed, and the need for field tests was identified.

Progress made between LIT#7 & LIT#16

Table 1 depicts the progress between the two LITs (LIT#7 & LIT#16) regarding the Expert Reasoning. By the first round of LITs, a basic implementation had been completed: capturing the knowledge in ontologies, integrating with the Fusion Engine, and detecting alerts with basic rules. By the second round of LITs, the Expert Reasoning tool had evolved with Knowledge Base enrichment, more sophisticated reasoning rules, and enhanced alerts.

Component progress between LITs (LIT#7 & LIT#16) for Expert Reasoning

| 1st Round of LITs | 2nd Round of LITs |
| --- | --- |
| Conceptualization of the Ontology / Knowledge Base implementation | Knowledge Base enrichment by adding Vehicles, K9 units, FR teams, more information about FRs (weight, height) etc., along with relationships between them |
| Reasoning rules implementation | More reasoning rules (Dehydration, Heatstroke) |
| Capture of the sum of the relevant information available in the project | Integration and handling of alerts coming from the Boots module |
| Integration with the Fusion Engine | Development of a "Complex" reasoning rule that combines resources from different components (Uniform, Boots) |
| Creation of an alert coming from a reasoning rule | Creation of more alerts with different severities, urgencies etc., depending on the result of the Expert Reasoning rules, which are then transferred to the COP via the Fusion Engine |

Table 1: Progress between LITs for the Expert Reasoning

Table 2 depicts the progress between the two LITs (LIT#7 & LIT#16) regarding the Fusion Engine and the WOA. Most of the progress was in terms of integration with other components.

Component progress between LITs (LIT#7 & LIT#16) for Fusion Engine and WOA

| 1st Round of LITs | 2nd Round of LITs |
| --- | --- |
| Data flow communication between the Fusion Engine's Data Generator, Kafka, and Persistence Service | Initialization of resources |
| Storing incoming data in the DB | Image use case integration ability |
| Kafka – ERE communication | Integration ability with COP, ARRP, BOOTS, BRACELET, and ERE |
| 1st intelligence service: alerting mechanism on FR health status | 2nd intelligence service: zone classification (1st version) |
| WOA resources management | WOA on-site conditions and WOA integration with the FE |

Table 2: Progress between LITs for Fusion Engine and WOA
