Collaborating partners
- TimTec Defense
Funding
- ESA (SIR5_22)
Scope
The goal of the project is to investigate the relationship between two important paradigms in machine-learning-driven Earth Observation. The first is sensor fusion - combining data from multiple sensors when building ML prediction models. The second is self-supervised learning (SSL), an active direction in deep learning research that enables the use of large quantities of unlabelled data to construct representation extraction models.
These models extract representations, or features, that have proven useful for various downstream tasks with little or no additional training, making them well suited to settings where labelled training data is scarce. Both paradigms are established in EO applications: sensor fusion is frequently used to mitigate the limitations of individual sensors, and interest in SSL and representation learning has recently grown thanks to the abundance of unlabelled EO data and the increasing number of potential applications for which only a limited amount of labelled training data is available. We argue that despite this interest, many open questions remain, stemming from the specifics of EO data and the continually evolving nature of sensors.
Our project will focus on the role of self-supervised deep learning for sensor fusion across sources that differ in spatial resolution and spectral coverage. We will investigate how representations from different sensors can be used together efficiently, without training a new representation extraction model from scratch after every change in the sensor setup. The study will be grounded in a real-world scenario requiring multispectral satellite data and high-resolution UAS images. We are collaborating with SENG as an end-user that requires hydrological analysis and prediction for the Soča river, which we plan to support with our flexible EO pipeline.
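To make the fusion idea concrete, the sketch below shows the general pattern of using frozen, pretrained representation extractors per sensor and training only a small head on a limited labelled set. This is a minimal illustration, not the project's actual pipeline: the "encoders" are stand-in random projections (real SSL encoders would be deep networks pretrained on unlabelled imagery), and the dimensions, sensor names, and labels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two frozen, pretrained SSL encoders, one per sensor.
# Here each "encoder" is a fixed random projection; in practice these would
# be deep networks pretrained on unlabelled satellite / UAS imagery.
W_sat = rng.normal(size=(12, 32))  # 12-band multispectral patch -> 32-d features
W_uas = rng.normal(size=(3, 32))   # 3-band RGB UAS patch        -> 32-d features

def encode(x, W):
    """Extract a representation with a frozen encoder (no weight updates)."""
    return np.tanh(x @ W)

# A small labelled set: late fusion concatenates per-sensor representations.
n = 40                            # only a few labelled samples
x_sat = rng.normal(size=(n, 12))  # multispectral inputs (synthetic)
x_uas = rng.normal(size=(n, 3))   # co-registered UAS inputs (synthetic)
y = rng.integers(0, 2, size=n)    # hypothetical binary labels

z = np.hstack([encode(x_sat, W_sat), encode(x_uas, W_uas)])  # fused features

# Linear probe via ridge regression: only this head is trained, so swapping
# one sensor's encoder does not require retraining the whole system.
lam = 1e-2
w = np.linalg.solve(z.T @ z + lam * np.eye(z.shape[1]), z.T @ y)
pred = (z @ w > 0.5).astype(int)
print("fused feature dim:", z.shape[1])
```

The design choice illustrated here is the one the project targets: representations are computed independently per sensor and combined downstream, so a change in the sensor setup only replaces one encoder rather than forcing a full retraining.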