Valentina Vasco is a postdoctoral researcher at the iCub Facility of the Italian Institute of Technology, within the framework of the Joint Lab established with Fondazione Don Carlo Gnocchi. Her research focuses on applications of the personal humanoid robot R1 in motor rehabilitation and elderly care, for clinical and household environments respectively.
She earned a Ph.D. in Bioengineering and Robotics - Advanced and Humanoid Robotics in the same lab, where she worked with the Neuromorphic Systems and Interfaces group, led by Dr. Chiara Bartolozzi, on exploiting event-driven vision for robust interaction of the iCub with moving objects. Specifically, she worked on separating ego-motion from independent motion with event cameras on the iCub robot.
She earned her Bachelor's and Master's degrees in Biomedical Engineering at the University of Naples, in 2010 and 2013 respectively. In her Master's thesis, she applied machine learning techniques to investigate the use of the electrocardiographic signal as a biometric pattern, as opposed to conventional biometrics (fingerprint, iris, voice, etc.).
Event-driven cameras are biologically inspired sensors that respond asynchronously to movement in the sensor's field of view, offering low latency and high temporal resolution (both on the order of microseconds). As such, they hold great potential for fast, low-power vision algorithms for robotics.
Visual motion estimation is a fundamental requirement for the iCub. In the event space, the motion of an edge appears as a clearly identifiable slope in the stream of events, and current optical flow techniques identify such structures. However, they suffer from the aperture problem: only the component of the flow vector normal to the edge's primary axis of orientation can be measured. Corner positions are unaffected by the aperture problem, as corners can be unambiguously tracked over time.
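The aperture problem can be illustrated in a few lines (a hypothetical sketch, not code from the works cited below): for a straight edge, a local observer can only recover the projection of the true motion onto the edge normal.

```python
import numpy as np

def normal_flow(true_flow, edge_dir):
    """Observable (normal) flow for a straight edge.

    true_flow: (vx, vy) true image-plane velocity of the edge.
    edge_dir:  vector along the edge orientation.
    Returns the component of the flow normal to the edge, the only
    part a local measurement can recover (the aperture problem).
    """
    edge_dir = np.asarray(edge_dir, dtype=float)
    edge_dir /= np.linalg.norm(edge_dir)
    normal = np.array([-edge_dir[1], edge_dir[0]])  # 90-degree rotation
    return np.dot(true_flow, normal) * normal

# A vertical edge (direction (0, 1)) translating diagonally: the
# vertical component of its motion is unobservable, only the
# horizontal (normal) component survives.
v = normal_flow((1.0, 1.0), (0.0, 1.0))
```

A corner, by contrast, constrains the flow in two independent directions, which is why corner events can be tracked unambiguously.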
We propose an adaptation of the widely used Harris corner detector to event-based data, which processes each event asynchronously, i.e. whenever the corner moves by a pixel. Although event-based data are motion-dependent, the algorithm robustly detects corners regardless of their speed, with an error distribution within 2 pixels.
We achieve a computational cost ~94% lower than that of the frame-based counterpart, at a detection rate proportional to speed. Tracking is therefore possible even for large displacements, as no information is lost between frames.
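The idea of scoring corners per event can be sketched as follows (a hypothetical re-implementation for illustration, not the published code): each incoming event updates a binary surface of recent events, and the standard Harris response is evaluated on a small patch around the event's pixel.

```python
import numpy as np

H, W = 128, 128
surface = np.zeros((H, W))   # binary map of the most recent events
K = 0.04                     # standard Harris sensitivity parameter
R = 4                        # patch radius around the current event

def harris_score(x, y):
    """Harris corner response on the event surface around pixel (x, y)."""
    patch = surface[y - R:y + R + 1, x - R:x + R + 1]
    gy, gx = np.gradient(patch)          # gradients of the binary surface
    # Entries of the second-moment matrix, accumulated over the patch.
    sxx, syy, sxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - K * trace ** 2          # > 0 indicates a corner

def on_event(x, y):
    """Process one event asynchronously: update the surface, score it."""
    surface[y, x] = 1.0
    return harris_score(x, y)

# Feed a synthetic L-shaped corner of events and score its apex.
for i in range(10):
    on_event(64 + i, 64)   # events along a horizontal edge
    on_event(64, 64 + i)   # events along a vertical edge
score = on_event(64, 64)   # positive Harris response at the apex
```

In this per-event form the detector only ever touches a small neighbourhood of the latest event, which is what makes the event-driven version so much cheaper than recomputing Harris over full frames.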
Although segmenting a moving target from the background is inherently solved by the sensor when it is stationary, cameras mounted on the robot are typically not stationary, as the robot interacts with its surrounding environment. Methods are therefore required to detect independent motion.
We are currently investigating methods for independent motion segmentation, in which scene flow statistics (computed only at corners and thus unaffected by the aperture problem) are learnt as a function of the robot's joint velocities when no independently moving objects are present. This allows us to detect independently moving objects by comparing the predicted and actual motion of their corners.
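The prediction-versus-measurement test can be sketched as follows (a minimal illustration under stated assumptions: the learnt flow model is stubbed here with a random linear map `W`, the threshold is arbitrary, and neither is taken from the work described above):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 6))   # stand-in for a model learnt offline from
                              # corner flow recorded while only the robot moves

def predict_ego_flow(joint_velocities):
    """Predict the image flow a static corner should show, given only
    the robot's own joint velocities (ego-motion)."""
    return W @ joint_velocities

def independently_moving(measured_flow, joint_velocities, threshold=0.5):
    """Flag a corner whose measured flow deviates from the ego-motion
    prediction by more than a threshold (in pixels)."""
    residual = np.linalg.norm(measured_flow - predict_ego_flow(joint_velocities))
    return residual > threshold

qdot = rng.normal(size=6)       # current joint velocities
ego = predict_ego_flow(qdot)    # flow of a corner on the static background

independently_moving(ego, qdot)                         # static corner: flagged False
independently_moving(ego + np.array([2.0, 0.0]), qdot)  # extra 2 px motion: flagged True
```

The key point is that the residual test only needs corner flow, so the comparison is not corrupted by the aperture problem that affects flow measured along edges.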
V. Vasco, A. Glover, and C. Bartolozzi. Fast Event-based Harris Corner Detection Exploiting the Advantages of Event-driven Cameras. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), pages 4144–4149, October 2016.
V. Vasco, A. Glover, Y. Tirupachuri, F. Solari, M. Chessa, and C. Bartolozzi. Vergence control with a neuromorphic iCub. In IEEE-RAS International Conference on Humanoid Robots (Humanoids 2016), November 2016. In press.