Valentina Vasco is a PhD fellow at the iCub Facility of the Italian Institute of Technology.
She earned her Bachelor's and Master's degrees in Biomedical Engineering at the University of Naples, in 2010 and 2013 respectively. In her Master's thesis, she applied machine learning techniques to investigate the use of the electrocardiographic signal as a biometric pattern, as an alternative to conventional modalities (fingerprint, iris, voice, etc.).
She is currently part of the Neuromorphic Systems and Interfaces group, led by Dr. Chiara Bartolozzi, working on how to exploit event-driven vision for robust interaction of the iCub with moving objects. Specifically, she has been working on event-based feature detection, extracting features that are unaffected by the aperture problem.
She has also worked on a biologically inspired implementation of vergence control for the iCub, based on populations of event-driven Gabor filters that simulate the neural receptive fields of the visual cortex. The results show that fast and accurate control is achieved, reducing the latency associated with frame-based cameras, independently of illumination conditions.
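As a rough illustration of the idea behind Gabor-filter-based vergence, the sketch below recovers a horizontal shift between left and right 1D signals from the phase difference of their complex Gabor responses. This is a minimal phase-based disparity estimate under simplifying assumptions (1D signals, a single filter, known wavelength), not the published event-driven implementation:

```python
import numpy as np

def gabor_complex(size=15, wavelength=6.0, sigma=3.0):
    """Complex Gabor filter: a complex exponential under a Gaussian envelope."""
    x = np.arange(size) - size // 2
    return np.exp(-x**2 / (2 * sigma**2)) * np.exp(-1j * 2 * np.pi * x / wavelength)

def disparity_estimate(left, right, wavelength=6.0):
    """Estimate the horizontal shift between two 1D signals from the
    phase difference of their Gabor responses (illustrative sketch)."""
    g = gabor_complex(wavelength=wavelength)
    # Phase difference via the product with the conjugate avoids wrap-around.
    dphi = np.angle(np.dot(left, g) * np.conj(np.dot(right, g)))
    return dphi * wavelength / (2 * np.pi)

x = np.arange(15)
signal = np.sin(2 * np.pi * x / 6.0)
shifted = np.sin(2 * np.pi * (x - 1) / 6.0)   # right view shifted by 1 pixel
print(round(disparity_estimate(signal, shifted), 1))  # ~1.0 pixel recovered
```

In a vergence controller, an estimate like this (computed over populations of filters at several orientations and scales) drives the eyes until the disparity at fixation is nulled.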
She is currently using the event-based feature detector for motion estimation and independent motion segmentation, in order to remove the visual events generated by ego-motion.
Event-driven cameras are biologically inspired sensors that respond asynchronously to movement in the sensor's field of view, offering low latency and high temporal resolution (both on the order of microseconds). As such, they have great potential for fast, low-power vision algorithms for robotics.
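Concretely, instead of frames, such a sensor emits a sparse stream of per-pixel events. The sketch below shows one common way to represent them (the field names and helper are illustrative, not the actual iCub/sensor driver API):

```python
from dataclasses import dataclass

# One event from an event-driven camera: a pixel fires asynchronously
# whenever it detects a local brightness change, instead of the sensor
# producing full frames at a fixed rate.
@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    ts: float      # timestamp, microsecond resolution
    polarity: int  # +1 brightness increase, -1 decrease

def events_in_window(events, t_start, t_end):
    """Select the events falling in the temporal window [t_start, t_end)."""
    return [e for e in events if t_start <= e.ts < t_end]

stream = [Event(10, 20, 5.0, 1), Event(11, 20, 12.0, -1), Event(30, 5, 40.0, 1)]
print(len(events_in_window(stream, 0.0, 20.0)))  # 2
```

Because each event carries its own timestamp, algorithms can process the stream event by event rather than waiting for a full frame.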
Visual motion estimation is a fundamental requirement for the iCub. In the event-space, the motion of an edge is clearly identifiable as a slope in the event stream, and current techniques for optical flow calculation identify such structures. However, they are affected by the aperture problem, as only the component of the flow vector normal to the primary axis of orientation of the object can be measured. Corner positions are unaffected by the aperture problem, as they can be unambiguously tracked over time.
We propose an adaptation of the widely used Harris corner detector to event-based data, which asynchronously processes each event as the corner moves by a pixel. Although event-based data are motion-dependent, the algorithm robustly detects corners regardless of their speed, with an error distribution within 2 pixels.
We achieve a computational cost ~94% lower than the frame-based counterpart, at a detection rate proportional to speed. Tracking is therefore possible even for large displacements, as no information is lost between frames.
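The core of the idea can be sketched as follows: the Harris score is computed on a small patch of a binary "event surface" (1 where a recent event occurred) around each incoming event. This is a minimal sketch of the adaptation, with assumed parameters and surface construction; the published detector differs in its details:

```python
import numpy as np

def harris_score_on_events(surface, x, y, k=0.04, win=4):
    """Harris corner score on a local patch of a binary event surface
    (1 where a recent event occurred, 0 elsewhere)."""
    patch = surface[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    gy, gx = np.gradient(patch)   # image gradients of the patch
    sxx = (gx * gx).sum()         # structure tensor entries,
    syy = (gy * gy).sum()         # summed over the patch
    sxy = (gx * gy).sum()
    det, trace = sxx * syy - sxy**2, sxx + syy
    return det - k * trace**2     # high for corners, low for edges

# Toy event surface: an L-shaped contour with a corner at (16, 16).
surface = np.zeros((32, 32), dtype=np.uint8)
surface[16, 4:17] = 1    # horizontal edge (rows are y, columns are x)
surface[8:17, 16] = 1    # vertical edge
corner_score = harris_score_on_events(surface, 16, 16)
edge_score = harris_score_on_events(surface, 11, 16)
print(corner_score > 0 and corner_score > edge_score)  # True
```

Evaluating this score only at the pixels where events arrive, rather than over full frames, is what yields the large reduction in computational cost.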
Although segmenting a moving target from the background is inherently solved by the sensor when it is stationary, cameras mounted on the robot are typically non-stationary, as the robot interacts with its surrounding environment. Methods are therefore required to detect independent motion.
We are currently investigating methods for independent motion segmentation, in which flow scene statistics (computed only at corners and thus unaffected by the aperture problem) are learnt as a function of the robot's joint velocities when no independently moving objects are present. This allows us to detect independently moving objects by comparing the predicted and actual motion of their corners.
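The predict-and-compare scheme can be sketched as follows. Here a simple linear map from joint velocities to expected corner flow is fitted on motion without independent objects, and a corner is flagged when its measured flow deviates too much from the prediction; the linear model, synthetic data, and threshold are illustrative assumptions, not the actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: joint velocities -> corner optical flow,
# generated from a known linear model plus noise (stand-in for the
# statistics collected when no independently moving object is present).
W_true = np.array([[0.8, -0.2, 0.1],
                   [0.0,  0.5, -0.3]])            # 2D flow from 3 joints
q_dot = rng.normal(size=(200, 3))                 # joint velocities
flow = q_dot @ W_true.T + 0.01 * rng.normal(size=(200, 2))

# Least-squares fit of the velocity-to-flow mapping.
W_hat, *_ = np.linalg.lstsq(q_dot, flow, rcond=None)
W_hat = W_hat.T

def is_independent_motion(q_dot_now, measured_flow, threshold=0.5):
    """Flag a corner as independently moving when its measured flow
    deviates from the ego-motion prediction by more than `threshold`."""
    predicted = W_hat @ q_dot_now
    return np.linalg.norm(measured_flow - predicted) > threshold

q_now = np.array([1.0, 0.0, 0.0])
ego_flow = W_true @ q_now                         # consistent with ego-motion
obj_flow = ego_flow + np.array([2.0, -1.5])       # extra independent motion
print(is_independent_motion(q_now, ego_flow))     # False
print(is_independent_motion(q_now, obj_flow))     # True
```

Corners whose residual exceeds the learned ego-motion prediction are kept as candidate independently moving objects; the rest are discarded as ego-motion-induced events.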
V. Vasco, A. Glover, and C. Bartolozzi. Fast Event-based Harris Corner Detection Exploiting the Advantages of Event-driven Cameras. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), pages 4144–4149, October 2016.
V. Vasco, A. Glover, Y. Tirupachuri, F. Solari, M. Chessa, and C. Bartolozzi. Vergence control with a neuromorphic iCub. In IEEE-RAS International Conference on Humanoid Robots (Humanoids 2016), November 2016. In press.