In the sense of hearing, it is important not only to recognize what you are listening to at a given moment, but also the position in space from which the sound is produced. This skill is useful in many situations, such as localizing predators or avoiding risks. Moreover, auditory attention mechanisms need this capability for a better understanding of speech. Auditory information is often used to support visual information: if the object of interest is not in the robot's field of view, the object is first localized by its sound. Then, once the object is in the robot's field of view, a sensory integration process is carried out in which visual and auditory information are combined to better identify and recognize the object.
Based on the human inner ear and the primary auditory brainstem, a neuromorphic, event-based, digital model of the human cochlea is used within the iCub to provide it with the sense of hearing. In addition, this model can extract binaural cues to perform the sound source localization task in real time, allowing the iCub to orient its head towards the sound source.
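To make the interaural time difference (ITD) cue concrete, the sketch below estimates an ITD by cross-correlating two microphone signals and maps it to an azimuth angle with the standard far-field approximation. This is an illustrative, non-spiking NumPy sketch, not the event-based model described here; the microphone spacing (0.15 m), the speed of sound, and the synthetic test signal are assumptions for the example.

```python
import numpy as np

def estimate_itd(left, right, fs):
    # Cross-correlate the two microphone signals and take the lag of the
    # peak as the interaural time difference (in seconds). A positive lag
    # means `left` is a delayed copy of `right`, i.e. the sound reached
    # the right microphone first.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / fs

def itd_to_azimuth(itd, mic_distance=0.15, speed_of_sound=343.0):
    # Far-field approximation: sin(theta) = ITD * c / d, where theta is
    # the source azimuth relative to the median plane (in degrees).
    s = np.clip(itd * speed_of_sound / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# Synthetic check: white noise reaching the right mic 10 samples earlier.
fs = 44100
rng = np.random.default_rng(0)
sig = rng.standard_normal(2205)  # ~50 ms of noise
delay = 10
left = sig[:-delay]
right = sig[delay:]

itd = estimate_itd(left, right, fs)   # close to 10 / 44100 s
az = itd_to_azimuth(itd)              # source roughly 31 deg to the right
```

In a spiking implementation such as the one this work builds on, the cross-correlation step is typically replaced by coincidence-detector neurons fed through delay lines (a Jeffress-style arrangement), but the geometric mapping from delay to azimuth is the same.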
Keywords: Neuromorphic Auditory Sensor, Spiking Neural Networks, biologically inspired models of sound source localization (interaural time difference, interaural amplitude difference).