Multi-sensor data fusion is an emerging research field whose aim is to combine information from multiple and diverse sources (e.g. different sensors – thermal and visible spectrum cameras, laser range sensors, microphones, RFID, etc.) to achieve inferences that cannot be obtained from a single sensor or source, or whose quality exceeds that of an inference drawn from any single source. To cite a few examples: person identification can be improved through a combination of audio (voice) and video (silhouette) cues, and object tracking in adverse weather conditions can benefit from the fusion of thermal and visible camera images.
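To make the person-identification example concrete, one simple fusion strategy is score-level (late) fusion: each modality independently produces a match score per enrolled identity, and the scores are combined by a weighted sum. The sketch below is purely illustrative; the identity names, scores, and weights are hypothetical and not taken from any specific system.

```python
def fuse_scores(audio_scores, video_scores, w_audio=0.4, w_video=0.6):
    """Combine per-identity match scores from two modalities by weighted sum.

    Assumes both dicts share the same identity keys and that scores are
    already normalized to a common range (e.g. [0, 1]).
    """
    return {
        identity: w_audio * audio_scores[identity] + w_video * video_scores[identity]
        for identity in audio_scores
    }

# Hypothetical normalized match scores for three enrolled identities.
audio = {"id_a": 0.70, "id_b": 0.55, "id_c": 0.20}
video = {"id_a": 0.60, "id_b": 0.80, "id_c": 0.30}

fused = fuse_scores(audio, video)
best = max(fused, key=fused.get)
# The two modalities disagree on the best match (audio favors id_a,
# video favors id_b); the weighted fused score resolves the conflict.
```

Weighted-sum fusion is only one option; in practice the weights are typically learned, and score normalization across modalities is a key design issue.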
Multi-sensor data fusion is inherently a multi-disciplinary subject that draws from such areas as statistical estimation, signal processing, computer vision and machine learning.
PAVIS is concerned with the development of multi-sensor data fusion techniques, mainly for automated surveillance applications. In this context, tasks such as person detection, tracking and re-identification, behavior analysis, and high-level scene understanding are addressed, with the aim of investigating the potential improvements offered by a multi-sensor setup through a combination of theoretical analysis and experimental testing.