TITLE

MARVEL: Multimodal Extreme Scale Data Analytics for Smart Cities Environments

DURATION

36 Months (01.01.2021 - 31.12.2023)

FUNDING PROGRAMME

Horizon 2020

CALL ID

H2020-ICT-2020-1

EU CONTRIBUTION

€5.998.086,25 (FBK: €377.500)

Handling, processing and delivering data from millions of devices around the world is a complex and remarkable feat that hinges on edge computing systems. While edge computing brings computation and data storage closer to the data sources, fog computing brings analytic services to the edge of the network, as an alternative to relying solely on cloud computing. The EU-funded MARVEL project will develop an Edge-to-Fog-to-Cloud (E2F2C) ubiquitous computing framework to enable multimodal perception and intelligence for audio-visual scene recognition, event detection and situational awareness in a Smart City environment. It will collect, analyse and data-mine multimodal audio-visual streaming data to improve the quality of life and services to citizens within the smart city paradigm, without violating ethical and privacy limits, in an AI-responsible manner. This is achieved via: (i) fusing large-scale distributed multimodal audio-visual data in real time; (ii) achieving fast time-to-insights; (iii) supporting automated decision making at all levels of the E2F2C stack; and (iv) delivering a personalized federated learning approach, where joint multimodal representations and models are co-designed and improved continuously through privacy-aware sharing of personalized fog and edge models of all interested parties.
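As a purely illustrative aid, the sketch below shows the general idea behind federated learning of the kind mentioned in point (iv): nodes train on their own data and share only model parameters with an aggregator, never the raw audio-visual streams. This is a minimal federated-averaging toy in Python, not the MARVEL implementation; all function names and the logistic-regression local step are assumptions made for the example.

```python
# Minimal federated-averaging sketch (illustrative only; not the MARVEL framework).
# Each edge/fog node trains a local model on its private data and shares only the
# resulting parameters with an aggregator, which averages them into a global model.
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """Toy local training: a few gradient-descent steps of logistic regression."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-data @ w))
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_round(global_weights, node_datasets):
    """One round: every node trains locally, then the aggregator averages weights."""
    local_models = [local_update(global_weights, X, y) for X, y in node_datasets]
    return np.mean(local_models, axis=0)

# Example: three nodes, each holding private data of the same feature dimension.
rng = np.random.default_rng(0)
nodes = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50).astype(float))
         for _ in range(3)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, nodes)
print(w)  # global model obtained without exchanging any raw node data
```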

FBK ROLE

FBK is a MARVEL project partner. FBK is responsible for (a) consolidating and ingesting extreme-scale audio-visual data-at-rest for batch processing and model training; (b) ensuring that streaming data are acquired from the edge devices and processed in near real time by the edge nodes and/or cloud servers according to the available resources; and (c) operational experiments and trials within the Italian pilot, focused on urban security in Trento. It participates with its Digital Society Center (Project Management Group, Deep Visual Learning Lab, E3DA Research Unit, and Speech Technology Lab).
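To illustrate point (b), the short Python sketch below shows one simple way a stream-processing task could be placed on an edge node or offloaded to the cloud based on the resources currently available. This is a hypothetical example under the assumption of plain CPU/memory thresholds; the node name, fields and function are invented for illustration and do not come from the MARVEL codebase.

```python
# Hypothetical placement sketch: run near-real-time stream analytics at the edge
# when resources allow, otherwise fall back to cloud processing.
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    free_cpu_cores: float
    free_memory_mb: int

def place_stream(task_cores: float, task_memory_mb: int, edge: EdgeNode) -> str:
    """Prefer processing at the edge node; offload to the cloud if it lacks resources."""
    if edge.free_cpu_cores >= task_cores and edge.free_memory_mb >= task_memory_mb:
        return f"edge:{edge.name}"
    return "cloud"

node = EdgeNode("trento-edge-01", free_cpu_cores=1.5, free_memory_mb=2048)
print(place_stream(2.0, 1024, node))  # "cloud": not enough CPU at the edge
print(place_stream(1.0, 1024, node))  # "edge:trento-edge-01"
```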

PARTNERS

[Partner logos]