In the last few years, the spread of new imaging modalities has improved the reliability of automated surveillance systems. In particular, far-infrared or thermal imaging copes efficiently with working conditions that limit the use of visible-spectrum devices, such as night-time or adverse weather. Moreover, thermal imaging is less affected by lighting conditions and provides enhanced contrast between human bodies and their environment. The most widespread approaches for automatic pedestrian detection, devised for applications such as automated surveillance in public places or driver assistance, rely on single-modality images, using either the visible spectrum or another modality such as near-infrared or far-infrared technology. However, since thermal and visible imaging provide complementary information about the same scene, their combination, or multi-modality fusion, can make the detection task more robust, enabling inferences that cannot be obtained from a single sensor or source, or whose quality exceeds that of an inference drawn from any single source.
The PAVIS team is committed to developing innovative strategies for fusing thermal and visible images, notably in the context of pedestrian detection. In particular, we investigate powerful descriptors that encode the relations between thermal and visible images, such as covariance matrices and mutual-information matrices, as well as advanced machine learning approaches like multiple-kernel support vector machines. The performance obtained on publicly available databases outclasses mono-modal detection strategies as well as a set of standard fusion policies.
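As an illustration of the descriptor-level fusion idea, the sketch below builds a region covariance descriptor from stacked visible and thermal feature maps. This is a minimal, hypothetical example, not the exact descriptor of the referenced paper: the per-pixel features (intensity and first-order gradients for each modality) and the function name are assumptions chosen for clarity. The resulting symmetric positive semi-definite matrix lies on a Riemannian manifold, which is what makes manifold-aware classifiers applicable downstream.

```python
import numpy as np

def covariance_descriptor(visible, thermal):
    """Region covariance descriptor fusing visible and thermal modalities.

    Each pixel is described by a feature vector drawn from BOTH modalities
    (intensity plus x/y gradients per modality); the whole region is then
    summarized by the covariance matrix of these per-pixel vectors, so
    cross-modal correlations are captured in the off-diagonal blocks.
    """
    feats = []
    for img in (visible, thermal):
        img = img.astype(float)
        gy, gx = np.gradient(img)          # first-order spatial gradients
        feats.extend([img, gx, gy])
    # Stack per-pixel feature vectors into an (n_pixels, d) matrix
    F = np.stack([f.ravel() for f in feats], axis=1)
    # d x d symmetric positive semi-definite covariance descriptor
    return np.cov(F, rowvar=False)
```

With three features per modality, a visible/thermal pair yields a 6x6 descriptor whose visible-thermal off-diagonal block encodes the inter-modality relations exploited by the fusion strategy.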
- M. San Biagio, M. Crocco, M. Cristani, S. Martelli, V. Murino, "Low-level Multimodal Integration on Riemannian Manifolds for Automatic Pedestrian Detection", 15th International Conference on Information Fusion (FUSION), 2012, Singapore.