EW - iCub explores the world

Event-driven sensors generate information only when something moves; static stimuli can be perceived only through exploratory actions. In vision, we transfer visual attention models to their event-driven, spiking counterparts, equipping iCub with a first low-latency gateway for selecting relevant regions for saccades and computationally intensive inspection. For the subsequent object recognition, we explore gradient-based local learning, with the goal of implementing a fully spiking pipeline on neuromorphic hardware. Tactile exploration will follow the exploratory procedures used by humans, guided by event-driven proprioception and tactile information, using unique neuromorphic multi-modal tactile sensors.
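
As an illustration of such a low-latency gateway, the sketch below accumulates a batch of events into a saliency-like map and picks its peak as a saccade target. The sensor resolution, the pooling scheme, and the toy data are assumptions for illustration only and do not reflect the actual spiking attention models.

```python
import numpy as np

# Hypothetical sensor resolution; the actual event-camera and attention
# model on iCub differ from this toy example.
WIDTH, HEIGHT = 304, 240

def saliency_from_events(events, blur=2):
    """Accumulate (x, y) events into a map and pool locally so that dense,
    moving regions stand out as salient."""
    sal = np.zeros((HEIGHT, WIDTH))
    np.add.at(sal, (events[:, 1], events[:, 0]), 1.0)
    kernel = 2 * blur + 1
    padded = np.pad(sal, blur, mode="edge")
    pooled = np.zeros_like(sal)
    for dy in range(kernel):            # simple box blur over a neighbourhood
        for dx in range(kernel):
            pooled += padded[dy:dy + HEIGHT, dx:dx + WIDTH]
    return pooled / kernel ** 2

def select_saccade_target(saliency):
    """Return the (x, y) pixel of maximal saliency as the next saccade target."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    return int(x), int(y)

# Toy usage: a burst of events around pixel (200, 120) attracts the saccade.
rng = np.random.default_rng(0)
burst = rng.normal(loc=(200, 120), scale=3, size=(500, 2)).astype(int)
burst = np.clip(burst, 0, (WIDTH - 1, HEIGHT - 1))
print(select_saccade_target(saliency_from_events(burst)))
```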

Methods: biologically inspired models of vision (attention, depth, and motion perception) and touch (exploratory procedures, hardware emulation of human glabrous skin tactile afferents), implemented using Spiking Neural Networks and spike-driven learning.
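
A minimal sketch of the spiking substrate these methods build on: a single leaky integrate-and-fire neuron, the basic unit of a Spiking Neural Network. The time constant, threshold, and input weight are illustrative values, not parameters of any model used here.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic building block of
# the Spiking Neural Networks mentioned above. Time constant, threshold and
# weight are illustrative, not taken from any specific model used here.
def lif(input_spikes, tau=20.0, threshold=1.0, dt=1.0, weight=0.3):
    v, output = 0.0, []
    for s in input_spikes:
        v += dt * (-v / tau) + weight * s   # leak plus weighted input spike
        if v >= threshold:                  # membrane crosses threshold:
            output.append(1)                # emit a spike and reset
            v = 0.0
        else:
            output.append(0)
    return output

# A regular input spike train: the neuron fires once enough charge has built up.
print(lif([1, 0, 1, 0, 1, 0, 1, 0, 1, 0]))
```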

MP - iCub interacts with people

Robots need to be aware of human presence and actions, both for safety and to engage in collaboration. We want to leverage the low latency and high temporal resolution of event-cameras to detect and track people and infer their actions online. We couple auditory and visual perception to detect and localize speech, towards equipping the robot with the capability of selecting one person and following their speech.
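
A minimal sketch of the kind of event-driven tracking this implies, assuming a person appears as a dense blob of events: the estimate is updated only from events that fall within a gate around the current position, so clutter elsewhere in the scene does not pull the track away. The gate size, smoothing factor, and toy data are illustrative assumptions.

```python
import numpy as np

def update_track(position, events, gate=30.0, alpha=0.2):
    """position: current (x, y) estimate; events: array of (x, y) events."""
    d = np.linalg.norm(events - position, axis=1)
    inliers = events[d < gate]               # ignore clutter outside the gate
    if len(inliers) == 0:
        return position                      # no evidence: keep the estimate
    centroid = inliers.mean(axis=0)
    return (1 - alpha) * position + alpha * centroid

# Toy usage: a cluster of events drifting to the right drags the track along.
rng = np.random.default_rng(1)
pos = np.array([100.0, 100.0])
for t in range(10):
    cluster = rng.normal(loc=(100 + 5 * t, 100), scale=2, size=(200, 2))
    pos = update_track(pos, cluster)
print(pos)
```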

Methods: event-driven ML, implemented in SNNs, and event-driven vision (motion estimation and tracking)

DI - iCub interacts with a dynamical world

Perceiving the motion of a target is essential for successful interaction with a dynamic environment, in which both the objects and the robot itself move. Event-cameras make it possible to track fast-moving targets without losing information “between frames,” as a moving object triggers events from all pixels along its spatio-temporal trajectory.
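
To make the idea of information "between frames" concrete, the sketch below treats each event as an (x, y, t, polarity) tuple and stamps its timestamp into a per-pixel time surface, preserving the spatio-temporal trace of a moving target at the temporal resolution of the events. The resolution and the synthetic trajectory are illustrative assumptions.

```python
import numpy as np

# Hypothetical sensor resolution for the toy example.
WIDTH, HEIGHT = 304, 240

def time_surface(events):
    """events: rows of (x, y, t, p); returns the last event time per pixel."""
    surface = np.zeros((HEIGHT, WIDTH))
    for x, y, t, p in events:
        surface[int(y), int(x)] = t
    return surface

# A target moving diagonally at constant speed, one event per millisecond.
ts = np.arange(0, 100)
events = np.stack([50 + ts, 60 + ts, ts * 1e-3, np.ones_like(ts)], axis=1)
surface = time_surface(events)
# Pixels along the trajectory carry increasing timestamps, so the trace
# encodes both where the target went and when it was there.
print(surface[60 + 99, 50 + 99], surface[60, 50])
```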

We develop tracking algorithms that are robust to the event clutter caused by ego-motion, and prediction algorithms that anticipate the trajectory of the target, giving the robot enough time to decide on and perform an adequate action. We explore data representations, algorithms, and coding schemes that guarantee real-time, low-latency processing.
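
As a minimal sketch of such prediction under a constant-velocity assumption (a stand-in for the actual prediction algorithms), the code below fits a linear motion model to recent tracked positions and extrapolates it ahead by a chosen horizon, giving the robot a predicted target position before the target arrives there. The horizon and the synthetic track are illustrative.

```python
import numpy as np

def predict_ahead(times, positions, horizon):
    """times: (N,), positions: (N, 2); predicted (x, y) at times[-1] + horizon."""
    A = np.stack([times, np.ones_like(times)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, positions, rcond=None)  # velocity, offset
    t_future = times[-1] + horizon
    return coeffs[0] * t_future + coeffs[1]

# Toy usage: a ball moving at (2, -1) pixels/ms, predicted 50 ms ahead.
t = np.arange(0, 20, dtype=float)
track = np.stack([10 + 2 * t, 200 - 1 * t], axis=1)
print(predict_ahead(t, track, horizon=50.0))   # approx. (148, 131)
```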