
Chiara Bartolozzi

Neuromorphic technology for robotics


via Morego 30
+39 010 71781 474


Chiara Bartolozzi is a Researcher at the Italian Institute of Technology. She earned a degree in Engineering from the University of Genova (Italy) and a Ph.D. in Neuroinformatics from ETH Zurich, developing analog subthreshold circuits that emulate biophysical neuronal properties in silicon, and modelling selective attention on hierarchical multi-chip systems.

She currently leads the Neuromorphic Systems and Interfaces group, with the aim of applying the "neuromorphic" engineering approach to the design of robotic platforms, as an enabling technology for the design of autonomous machines.

This goal is pursued by inducing a paradigm shift in robotics, based on the emerging concept of Event-Driven (ED) sensing and processing. Similarly to their biological counterparts, and differently from traditional robotic sensors, ED sensory systems sample their input signal upon fixed relative amplitude changes, intrinsically adapting to the dynamics of the sensory signal: temporal resolution is extremely high for fast transitory signals and decreases for slower inputs.
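This sampling principle can be sketched in software as a send-on-delta encoder, a simplified stand-in for the analog pixel circuits (the function name and threshold values here are illustrative):

```python
import math

def send_on_delta(signal, delta):
    """Emit an event each time the input moves by at least `delta` from
    the value at the last event: the sample rate adapts to the signal's
    dynamics instead of being fixed by a clock."""
    events = []
    last = signal[0]
    for t, x in enumerate(signal):
        if abs(x - last) >= delta:
            events.append((t, 1 if x > last else -1))  # ON / OFF polarity
            last = x
    return events

# A fast transient yields a burst of events; a slow drift yields few.
fast = [math.sin(2 * math.pi * t / 20) for t in range(100)]
slow = [math.sin(2 * math.pi * t / 200) for t in range(100)]
assert len(send_on_delta(fast, 0.1)) > len(send_on_delta(slow, 0.1))
```

A constant input produces no events at all, which is exactly the resource saving described above.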

This approach naturally leads to better robots that acquire, transmit and process information only when needed, optimising the use of resources and enabling real-time, low-cost operation.



Development and incremental improvement of diverse types of ED sensors

We investigate two main sensing modalities, touch and vision, with the long-term goal of progressively substituting most of the iCub's sensors with their ED counterparts.

In the visual domain, we work on improving pixel functionality, noise resilience and size, and on developing data serialisation, a crucial step towards integrating higher-resolution sensors on the robot. We proposed a novel, more robust circuit for change detection in the visual signal, designed to tackle one of the major drawbacks of change detection by filtering high-frequency noise without low-pass limiting the response to large and fast transients.

The sparseness of tactile input over space and time calls for ED encoding, where the sensors are not continuously sampled but instead wake up upon stimulation. The iCub is currently equipped with capacitive sensors; at the same time, different groups within IIT are developing new materials and technologies for tactile transducers. This line of research aims at complementing such developments with neuromorphic ED readout circuits for tactile sensing, based on POSFET devices.
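The wake-up behaviour of an ED taxel can be illustrated with a minimal sketch, assuming a simple change threshold (the `taxel_readout` name and threshold value are illustrative, not the actual POSFET spike-based readout):

```python
def taxel_readout(pressure, threshold=0.5):
    """Sketch of an event-driven taxel: no output while the skin is
    untouched; an event fires only when the pressure changes by more
    than the threshold relative to the last event."""
    events = []
    baseline = pressure[0]
    for t, p in enumerate(pressure):
        if abs(p - baseline) > threshold:
            events.append((t, 1 if p > baseline else -1))
            baseline = p
    return events

# A brief press on otherwise idle skin produces only two events:
# one at contact onset and one at release.
touch = [0.0] * 50 + [1.0] * 10 + [0.0] * 50
assert taxel_readout(touch) == [(50, 1), (60, -1)]
```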

Integration of ED sensors and computational platforms on the neuromorphic iCub

The iCub is progressively being updated to integrate ED technology. A modular infrastructure, based on FPGA technology, serialisation and the YARP middleware, supports the seamless integration on the robot of different ED sensors, neuromorphic computational platforms (SpiNNaker and DYNAP) and software modules for ED sensory processing. Among the latest developments, we implemented a new vision system integrating upgraded ED and frame-based sensors. Coupling low-spatial-resolution, wide-field-of-view, motion-sensitive ED sensors with low-temporal- but high-spatial-resolution, narrow-field-of-view frame-based sensors parallels the organisation of primate foveated vision: the coarse, wide-field periphery detects salient regions in the scene, which guide sequential saccades that bring the region of interest into the high-acuity fovea for detailed stimulus processing. To explore ED tactile sensing, we are working on emulating ED encoding with the capacitive sensors currently integrated on the iCub. Besides the improvement in communication bandwidth afforded by sensor-side compression and the serial AER protocol, the final goal of this activity is to acquire asynchronous data from different types of sensors (vision and skin at first) and to study the use of temporal correlations for multi-sensory integration.
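To give a flavour of address-event representation (AER), here is a hypothetical packing of one event into a single integer word; the field widths are illustrative and do not reflect the actual AER word layout used on the iCub:

```python
def encode_event(timestamp_us, x, y, polarity):
    """Pack one address-event into an integer word:
    [timestamp | 12-bit x | 12-bit y | 1-bit polarity]."""
    assert 0 <= x < 4096 and 0 <= y < 4096 and polarity in (0, 1)
    return (timestamp_us << 25) | (x << 13) | (y << 1) | polarity

def decode_event(word):
    """Invert the packing above, field by field."""
    polarity = word & 0x1
    y = (word >> 1) & 0xFFF
    x = (word >> 13) & 0xFFF
    timestamp_us = word >> 25
    return timestamp_us, x, y, polarity

# Round-trip: serialise an event and recover it unchanged.
assert decode_event(encode_event(123456, 120, 90, 1)) == (123456, 120, 90, 1)
```

Only the pixel address and timestamp travel on the bus, so bandwidth scales with scene activity rather than with sensor resolution.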


Development of ED sensory processing - the use of time for computing

The development of ED sensing and of the related infrastructure for its integration on the iCub is instrumental to the development of an autonomous robot that exploits efficient sensory compression, enabling fast, low-cost acquisition, storage and computation. Our results show that the temporal signature of events from vision sensors adds information about the visual input, and that information about visual stimuli is maximised when it is encoded with a temporal resolution of a few milliseconds; this temporal resolution is preserved in higher hierarchical computational layers, improving the separability between objects. The core idea of research in this domain is to exploit this additional temporal information, and the high temporal resolution coupled with low data rate, to develop methods for processing moving stimuli in real time. This, coupled with precise static spatial information from traditional frame-based cameras, will greatly enhance computer vision for robots that have to interact with objects and people in real time, adapting to sudden changes, failures and uncertainties.
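The effect of temporal resolution can be illustrated by quantising event timestamps at different bin widths (a toy sketch; the spike times below are invented for the example):

```python
def bin_events(timestamps_ms, bin_ms):
    """Quantise event times to a given temporal resolution; the number
    of distinct occupied bins is a crude proxy for how much temporal
    structure the representation retains."""
    return sorted({int(t // bin_ms) for t in timestamps_ms})

spikes = [1.2, 3.7, 4.1, 9.8, 10.3]
assert len(bin_events(spikes, 1.0)) == 5    # ~1 ms: pattern fully resolved
assert len(bin_events(spikes, 10.0)) == 2   # 10 ms: events collapse together
```

Coarser bins merge distinct events into the same slot, discarding exactly the temporal signature that improves separability between stimuli.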

Motion segmentation and perception

ED sensing allows the observation of the full trajectory of a moving object; this capability can be exploited to improve the behaviour of robots during manipulation and grasping, as well as during interaction with people and objects in collaborative tasks. Despite the inherent segmentation of moving objects and the spatio-temporal information available from the ED sensory stream, in a robotic scenario where the robot moves in a cluttered environment a large number of events arise from the edges in the scene. We are developing methods to robustly select a salient target (using stimulus-driven models of selective attention) and track it with probabilistic filtering in the event space, as well as methods to compute the motion of objects and discount events due to ego-motion.
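A toy sketch of tracking in the event space, using an exponential filter as a simple stand-in for the probabilistic filters mentioned above (the function name and gain are illustrative):

```python
def track(events, alpha=0.2):
    """Follow a target through an event stream: each incoming event
    (x, y) pulls the centroid estimate towards its address, so the
    estimate updates at the sensor's own temporal resolution."""
    cx, cy = events[0]
    trajectory = []
    for x, y in events:
        cx += alpha * (x - cx)
        cy += alpha * (y - cy)
        trajectory.append((cx, cy))
    return trajectory

# The estimate follows a target drifting along the x axis, with a small lag.
traj = track([(float(i), 0.0) for i in range(50)])
assert 40.0 < traj[-1][0] < 49.0
```

Because updates happen per event rather than per frame, the estimate can follow fast motion between what would otherwise be consecutive frames.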

Robust speech perception

The fine temporal dynamics of ED vision can be exploited to implement a speech recognition system based on speech-production-related information (such as lip movement, opening, closure, shape, etc.) to improve models of temporal dynamics in speech and to compensate for poor acoustic information in noisy acoustic environments. The temporal features extracted from the ED visual signal will be used for the as-yet unexplored cross-modal ED speech segmentation that will drive the processing of speech. To increase robustness to acoustic noise and atypical speech, acoustic and visual features will be combined to recover the phonetic gestures of the inner vocal tract (articulatory features). Visual, acoustic and (recovered) articulatory features will be the observation domain of a novel speech recognition system for the robust recognition of key phrases.


Selected Publications

S. Caviglia, L. Pinna, M. Valle, and C. Bartolozzi. Spike-based readout of POSFET tactile sensors. IEEE Transactions on Circuits and Systems I – Regular Papers, PP(99):1–11, 2016.

V. Vasco, A. Glover, Y. Tirupachuri, F. Solari, M. Chessa, and C. Bartolozzi. Vergence control with a neuromorphic iCub. In IEEE-RAS International Conference on Humanoid Robots (Humanoids 2016), November 2016. In press.

N. Qiao, C. Bartolozzi, and G. Indiveri. Automatic gain control of ultra-low leakage synaptic scaling homeostatic plasticity circuits. In IEEE 2016 Biomedical Circuits and Systems Conference (BioCAS 2016), October 2016. In press.

A. Glover and C. Bartolozzi. Event-driven ball detection and gaze fixation in clutter. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), pages 2203–2208, October 2016.

V. Vasco, A. Glover, and C. Bartolozzi. Fast event-based Harris corner detection exploiting the advantages of event-driven cameras. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), pages 4144–4149, October 2016.

S. Caviglia, L. Pinna, M. Valle, and C. Bartolozzi. An event-driven POSFET taxel for sustained and transient sensing. In 2016 IEEE International Symposium on Circuits and Systems (ISCAS), pages 349–352, May 2016.

A.D. Rast, A.B. Stokes, S. Davies, S.V. Adams, H. Akolkar, D.R. Lester, C. Bartolozzi, A. Cangelosi, and S. Furber. Transport-independent protocols for universal AER communications. In Sabri Arik, Tingwen Huang, Kin Weng Lai, and Qingshan Liu, editors, Neural Information Processing: 22nd International Conference, ICONIP 2015, November 9-12, 2015, Proceedings, Part IV, pages 675–684, Cham, 2015. Springer International Publishing.

C.G. Mayr, S. Sheik, C. Bartolozzi, and E. Chicca. Editorial: Synaptic plasticity for neuromorphic systems. Frontiers in Neuroscience, 10:214, 2016.

F. Boi, T. Moraitis, V. De Feo, F. Diotalevi, C. Bartolozzi, G. Indiveri, and A. Vato. A bidirectional brain-machine interface featuring a neuromorphic hardware decoder. Frontiers in Neuroscience, 10:563, 2016.

C. Bartolozzi, L. Natale, F. Nori, and G. Metta. Robots with a sense of touch. Nature Materials, 15(9):921–925, September 2016.

H. Akolkar, C. Meyer, O. Marre, C. Bartolozzi, S. Panzeri, X. Clady, and R.B. Benosman. What can neuromorphic event-driven precise timing add to spike-based pattern recognition? Neural Computation, 27(3):561–593, May 2015.

E. Chicca, F. Stefanini, C. Bartolozzi, and G. Indiveri. Neuromorphic electronic circuits for building autonomous cognitive systems. Proceedings of the IEEE, 102(9):1367–1388, Sept 2014. 





IIT: the numbers

Istituto Italiano di Tecnologia (IIT) is a public research institute that adopts the organisational model of a private-law foundation. IIT is overseen by the Ministero dell'Istruzione, dell'Università e della Ricerca and the Ministero dell'Economia e delle Finanze (the Italian Ministries of Education, Economy and Finance). The Institute was set up under Italian Law 326/2003 with the objective of promoting excellence in basic and applied research and fostering Italy's economic development. Construction of the laboratories started in 2006 and finished in 2009.

IIT has an overall staff of about 1,440 people, of whom about 85% are scientific staff. 45% of the researchers come from abroad: 29% are foreigners from more than 50 countries and 16% are returning Italians. The scientific staff currently consists of approximately 60 Principal Investigators, 110 researchers and technologists, 350 post-docs, 500 PhD students and grant holders, and 130 technicians. External funding has allowed the creation of more than 330 positions. The average age is 34, and the gender balance is 41% female to 59% male.

In 2015 IIT received 96 million euros in public funding (accounting for 80% of its budget) and obtained 22 million euros in external funding (accounting for 20% of its budget). External funding comes from 18 European projects, 17 other national and international competitive grants, and approximately 60 industrial projects.

So far IIT accounts for about 6,990 publications, more than 130 European grants and 11 ERC grants, more than 350 active patent applications, and over 12 start-ups already established, with as many about to be launched. The Institute's scientific activity has been further strengthened since 2009 with the establishment of 11 research nodes throughout Italy (Torino, Milano, Trento, Parma, Roma, Pisa, Napoli, Lecce, Ferrara) and abroad (MIT and Harvard University, USA), which, along with the Genoa-based Central Lab, implement the research programmes included in the 2015-2017 Strategic Plan.

