While accuracy and speed have long been at the top of the agenda for robot design and control, the development of new actuators and control architectures is bringing a new focus on passive and active compliance, energy optimization, human-robot collaboration, easy-to-use interfaces and safety.
The machine learning tools developed for the precise reproduction of reference trajectories need to be rethought and adapted to these new challenges. For planning, storing, controlling, predicting or re-using motion data, the encoding of a robot skill goes beyond its representation as a single reference trajectory to be tracked or a set of points to be reached. Other sources of information need to be considered, such as the local variation and correlation in the movement. Moreover, most machine learning tools developed so far are split into an offline model-estimation phase and a retrieval/regression phase. Learning in compliant robots should instead treat demonstration and reproduction as an interlaced process that combines imitation and reinforcement learning strategies to incrementally refine the task.
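As a minimal sketch of this idea (synthetic data, NumPy only, not the group's actual implementation), a skill can be encoded as a mean trajectory plus a time-varying covariance, so that the local variation and correlation across demonstrations is retained rather than a single reference trajectory:

```python
import numpy as np

# Hypothetical illustration: encode a skill as a mean trajectory plus a
# time-varying covariance, capturing local variation and correlation
# across several (here, synthetic) kinesthetic demonstrations.
rng = np.random.default_rng(0)

T = 100                      # time steps per demonstration
n_demos = 5                  # number of demonstrations
t = np.linspace(0, 1, T)

# Synthetic 2-D demonstrations: same underlying motion, with execution
# noise that grows over time (the task is less constrained at the end).
demos = np.stack([
    np.column_stack((np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)))
    + rng.normal(scale=0.02 + 0.05 * t[:, None], size=(T, 2))
    for _ in range(n_demos)
])                           # shape (n_demos, T, 2)

mean_traj = demos.mean(axis=0)            # (T, 2) average path
# Per-time-step covariance: where the task tolerates variation
covs = np.stack([np.cov(demos[:, k, :], rowvar=False) for k in range(T)])

# A compliant controller could stiffen where variance is low
# (task-critical phases) and stay soft where variance is high.
stiffness = 1.0 / (np.trace(covs, axis1=1, axis2=2) + 1e-6)
print(mean_traj.shape, covs.shape)
```

In this toy example the stiffness profile is highest at the start, where the demonstrations agree most, which is the kind of information a single reference trajectory discards.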
The development of compliant robots brings new challenges in machine learning and physical human-robot interaction, extending the skill transfer problem towards tasks involving force information, and towards systems capable of learning how to cope with the various sources of perturbation introduced by the user and the task. We take the perspective that both the redundancy of the robot architecture and that of the task can be exploited to adapt a learned movement to new situations, while at the same time improving safety and energy consumption. Through these physical guidance capabilities, the robot becomes a tangible interface that can exploit the natural teaching tendencies of the user (scaffolding, kinesthetic teaching, exaggeration of movements to highlight the relevant features, etc.).
Along with the learning aspects, this perspective also calls for new interfaces capable of visualizing the learned skill in an interactive manner, so that the user can assess the robot's progress, as well as estimate its current generalization capabilities and understanding of the task.
Toward these goals, the Learning and Interaction Group explores the following issues:
- Robust and compact representation of movements by the superposition of basis flow fields.
- Incremental learning of tasks by combining imitation and exploration strategies in a probabilistic framework.
- Safety embedded in the teaching mechanism by exploiting task redundancy and compliant control.
- Development of EM-based Reinforcement Learning strategies that can cope with real-world exploration trials, such as the use of multidimensional rewards and multi-resolution policies.
- User interfaces for the assessment of skill acquisition through active sensing and interactive data visualization.
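To give a flavour of the EM-based reinforcement learning direction above, the following is a hypothetical toy sketch in the spirit of reward-weighted policy search (e.g. PoWER-style updates), not the group's implementation: policy parameters are perturbed for exploration, and the M-step averages the perturbations weighted by the reward they obtained.

```python
import numpy as np

# Toy sketch of EM-style policy search: exploration by Gaussian
# perturbation of policy parameters, then a reward-weighted M-step.
rng = np.random.default_rng(1)

def reward(theta):
    # Toy task: optimum (unknown to the learner) at theta = [1.0, -0.5]
    return np.exp(-np.sum((theta - np.array([1.0, -0.5])) ** 2))

theta = np.zeros(2)          # initial policy parameters
sigma = 0.3                  # exploration noise
for _ in range(50):
    eps = rng.normal(scale=sigma, size=(20, 2))   # 20 exploration rollouts
    rewards = np.array([reward(theta + e) for e in eps])
    # M-step: reward-weighted average of the perturbations
    theta = theta + (rewards[:, None] * eps).sum(0) / rewards.sum()

print(theta)  # drifts toward the high-reward region around [1.0, -0.5]
```

The reward-weighted average needs no gradient of the reward function, which is what makes this family of updates attractive for real-world exploration trials where only a scalar (or multidimensional) return is available per rollout.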
The long-term view is to develop flexible learning tools that anticipate the ongoing rise of compliant actuator technologies. In particular, we would like to ensure a smooth transition to passively compliant actuators and manipulators that can be used safely in the proximity of users, by considering physical contact and collaborative interaction as key elements in the transfer of skills.
- Petar Kormushev (Researcher / Team Leader)
- Sylvain Calinon (External Collaborator)
- Danilo Bruno (Post Doc)
- Nawid Jamali (Post Doc)
- Leonel Rozo (Post Doc)
- Rodrigo Jamisola (Post Doc)
- Przemyslaw Kryczka (Post Doc)
- Reza Ahmadzadeh (PhD student)
- Milad Malekzadeh (PhD student)
- João Silvério (PhD student)
- Affan Pervez (MSc student, KTH, Sweden)
- Arnau Carrera (PhD student, Universitat de Girona, Spain)
- Przemyslaw Kryczka (PhD student, Waseda University, Japan)
- Ahmed Wafik Amin (MSc student, EMARO)
- Amir Santos (MSc student, Mexico)
- Matthijs Jansen (MSc student, TU Delft, Netherlands)
- Matteo Leonetti (Post Doc)
- Tohid Alizadeh (PhD graduate)
- Davide De Tommaso (PhD graduate)
- Antonio Pistillo (PhD graduate)
Rodrigo S. Jamisola, Petar Kormushev, Antonio Bicchi, and Darwin G. Caldwell, "Haptic Exploration of Unknown Surfaces with Discontinuities", In Proc. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS 2014), 2014. [details]
Petar Kormushev, Sylvain Calinon and Darwin G. Caldwell, "Reinforcement Learning in Robotics: Applications and Real-World Challenges", MDPI Journal of Robotics (ISSN 2218-6581), Special Issue on Intelligent Robots, vol.2, pp.122-148, 2013. [pdf] [bibtex]
Sylvain Calinon, Petar Kormushev, Darwin G. Caldwell. "Compliant skills acquisition and multi-optima policy search with EM-based reinforcement learning", Robotics and Autonomous Systems, Volume 61, Issue 4, pp. 369-379, 2013. [pdf] [bibtex]
Petar Kormushev and Darwin G. Caldwell. "Improving the Energy Efficiency of Autonomous Underwater Vehicles by Learning to Model Disturbances", Proc. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS 2013), Tokyo, Japan, 2013. [pdf] [bibtex]
Seyed Reza Ahmadzadeh, Petar Kormushev and Darwin G. Caldwell, "Visuospatial Skill Learning for Object Reconfiguration Tasks", Proc. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS 2013), Tokyo, Japan, 2013. [pdf] [bibtex]
Kormushev P., Calinon, S. and Caldwell, D.G., "Imitation learning of positional and force skills demonstrated via kinesthetic teaching and haptic input", Advanced Robotics, Vol. 25, pp. 581-603, 2011. [pdf][bibtex]
Calinon, S., D'halluin, F., Sauser, E.L., Caldwell, D.G. and Billard, A.G. (2010). "A probabilistic approach based on dynamical systems to learn and reproduce gestures by imitation". IEEE Robotics and Automation Magazine. [pdf][bibtex]
Calinon, S., Pistillo, A. and Caldwell, D.G. (2011). "Encoding the time and space constraints of a task in explicit-duration Hidden Markov Model". In Proceedings of the IEEE/RSJ Intl Conference on Intelligent Robots and Systems (IROS).
Kormushev P., Ugurlu, B., Calinon, S., Tsagarakis, N. and Caldwell, D.G. (2011). "Bipedal Walking Energy Minimization by Reinforcement Learning with Evolving Policy Parameterization". In Proceedings of the IEEE/RSJ Intl Conference on Intelligent Robots and Systems (IROS). [bibtex]
Pistillo, A., Calinon, S. and Caldwell, D.G. (2011). "Bilateral Physical Interaction with a Robot Manipulator through a Weighted Combination of Flow Fields". In Proceedings of the IEEE/RSJ Intl Conference on Intelligent Robots and Systems (IROS).
Kormushev P., Nenchev, D.N., Calinon, S., and Caldwell, D.G., "Upper-body Kinesthetic Teaching of a Free-standing Humanoid Robot", IEEE Intl. Conf. on Robotics and Automation (ICRA 2011), 2011. [pdf][bibtex]
Calinon, S., Sardellitti, I. and Caldwell, D.G. (2010). "Learning-based control strategy for safe human-robot interaction exploiting task and robot redundancies". In Proceedings of the IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS). [pdf][bibtex]
Kormushev P., Calinon, S. and Caldwell, D.G., "Robot Motor Skill Coordination with EM-based Reinforcement Learning", Proc. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS-2010), 2010. [pdf][bibtex]
Calinon, S., Sauser, E.L., Billard, A.G. and Caldwell, D.G. (2010). "Evaluation of a probabilistic approach to learn and reproduce gestures by imitation". In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Anchorage, Alaska, USA. [pdf][bibtex]
Kormushev P., Calinon, S., Saegusa, R. and Metta, G., "Learning the skill of archery by a humanoid robot iCub", Proc. IEEE Intl Conf. on Humanoid Robots (Humanoids-2010), 2010. [pdf][bibtex]
Calinon, S., D'halluin, F., Caldwell, D.G. and Billard, A. (2009). "Handling of multiple constraints and motion alternatives in a robot programming by demonstration framework". In Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids), Paris, France. [pdf][bibtex]