
Vadim Tikhanoff

Technologist

Facility

iCub Tech

Contacts

+39 010 2898 243

About

Dr. Vadim Tikhanoff currently holds a Scientist/Technologist position at IIT, working in the iCub Facility. He has a solid background in human-machine interaction and extensive experience in real-time computer vision, machine learning, and the development of robust, state-of-the-art software solutions and architectures. He graduated in Computer Science from the University of Essex, UK (2004), presenting a study on Artificial Intelligence and, in particular, Multi-Agent Systems. In 2005 he completed his MSc in Interactive Intelligent Systems, and in 2009 he attained his PhD on the development of cognitive capabilities in humanoid robots at the University of Plymouth, UK, within the Adaptive Behaviour and Cognition Research Lab. He was a postdoctoral researcher at IIT, first in the Robotics, Brain and Cognitive Sciences laboratory and then in the iCub Facility. Vadim has authored numerous papers published in renowned international conferences, international journals, and book chapters, in areas ranging across neural networks, language acquisition, cognitive systems, and image processing. He has participated in several EU-funded projects, including iTALK, eMorph, Xperience, POETICON and POETICON++. He has served as Program Chair and program committee member of various conferences, and was a guest Associate Editor of the Humanoid Robotics research topic of Frontiers in Robotics and AI.

Projects

In general terms, my background lies at the intersection of robotic systems, in particular humanoid robots, and artificial intelligence. My main research focuses on developing innovative techniques and approaches for the design of robot skills, so that a robot can interact with the surrounding physical world and manipulate objects in an adaptive, productive and efficient manner. In particular, I develop a cognitive architecture that allows a humanoid robot to acquire improved exploratory visual competencies, coupled with refined motor control and conversational interaction. More specifically, my research has focused on fully instantiated systems that integrate perception and learning, are capable of interacting and communicating in both virtual and real worlds, and perform goal-directed tasks. Such an architecture allows the robot to work effectively in unstructured environments and to cooperate with and assist humans in their daily chores.
In the past few years my research has focused on the iCub platform, building on the above concepts in order to maximise the impact of my work on the scientific community. My main contributions can be divided into three main streams: computer vision (perception), cognitive architectures (learning the effects of actions), and software engineering and system integration.

Selected Publications

  • Kompatsiari K., Ciardo F., Tikhanoff V., et al. (2019) International Journal of Social Robotics. https://doi.org/10.1007/s12369-019-00565-4
  • Kompatsiari K., Ciardo F., Tikhanoff V., Metta G., Wykowska A. (2018) On the role of eye contact in gaze cueing. Scientific Reports 8(1):17842
  • Fantacci C., Pattacini U., Tikhanoff V., Natale L. (2017) Markerless visual servoing for humanoid robot platforms. IEEE-RAS International Conference on Humanoid Robots
  • Fantacci C., Pattacini U., Tikhanoff V., Natale L. (2017) Visual end-effector tracking using a 3D model-aided particle filter for humanoid robot platforms. IEEE/RSJ International Conference on Intelligent Robots and Systems
  • Fantacci C., Vezzani G., Pattacini U., Tikhanoff V., Natale L. (2017) Markerless visual servoing on unknown objects for humanoid robot platforms. arXiv preprint arXiv:1710.04465
  • Kompatsiari K., Tikhanoff V., Ciardo F., Metta G., Wykowska A. (2017) The Importance of Mutual Gaze in Human-Robot Interaction. In: Kheddar A. et al. (eds) Social Robotics. ICSR 2017. Lecture Notes in Computer Science, vol 10652, Springer, 443-452. https://doi.org/10.1007/978-3-319-70022-9_44
  • Mar T., Tikhanoff V., Natale L. (2017) What can I do with this tool? Self-supervised learning of tool affordances from their 3D geometry. IEEE Transactions on Cognitive and Developmental Systems
  • Mar T., Tikhanoff V., Metta G., Natale L. (2017) Self-Supervised Learning of Tool Affordances from 3D Tool Representation through Parallel SOM Mapping. IEEE International Conference on Robotics and Automation, Singapore, 2017
  • Mar T., Tikhanoff V., Metta G., Natale L. (2015) Self-supervised learning of grasp dependent tool affordances on the iCub humanoid robot. IEEE International Conference on Robotics and Automation (ICRA), Seattle
  • Fanello S., Pattacini U., Gori I., Tikhanoff V., Randazzo M., Roncone A., Odone F., Metta G. (2014) 3D Stereo Estimation and Fully Automated Learning of Eye-Hand Coordination in Humanoid Robots. IEEE-RAS International Conference on Humanoid Robots
  • Gori I., Pattacini U., Tikhanoff V., Metta G. (2014) Three-Finger Precision Grasp on Incomplete 3D Point Clouds. IEEE International Conference on Robotics and Automation
  • Tikhanoff, V., Pattacini U., Natale, L. and Metta, G. (2013) Exploring affordances and tool use on the iCub. IEEE-RAS International Conference on Humanoid Robots. 
  • Browatzki B., Tikhanoff V., Metta G., Bülthoff H., Wallraven C. (2012) Active Object Recognition on a Humanoid Robot. IEEE International Conference on Robotics and Automation (ICRA 2012), St. Paul, Minnesota, USA, May 14-18 (best paper award nomination)
  • Tikhanoff V., Cangelosi A., Metta G. (2011). Language understanding in humanoid robots: iCub simulation experiments. IEEE Transactions on Autonomous Mental Development. 3(1), 17-29 
  • Borisyuk R., Kazanovich Y., Chik D., Tikhanoff V. & Cangelosi A. (2009) A neural model of selective attention and object segmentation in the visual scene: An approach based on partial synchronization and star-like architecture of connections. Neural Networks
  • Courtney P., Michel O., Cangelosi A., Tikhanoff V., Metta G., Natale L., Nori F., & Kernbach S. 2009. Cognitive systems platforms using open source. In: R. Madhavan, E. Tunstel & E. Messina. Performance Evaluation and Benchmarking of Intelligent Systems, Springer.
  • Tikhanoff V., Cangelosi A., Fitzpatrick P., Metta G., Natale L., Nori F. (2008) An open-source simulator for cognitive robotics research: The prototype of the iCub humanoid robot simulator. Proceedings of the IEEE Workshop on Performance Metrics for Intelligent Systems, Washington, D.C.
  • Cangelosi A, Tikhanoff V, Fontanari J, Hourdakis E. Integrating Language and Cognition: A Cognitive Robotics Approach. IEEE Computational Intelligence Magazine August 2007
  • Tikhanoff V., Cangelosi A., Fontanari J., Perlovsky L. (2007) Scaling up of action repertoire in linguistic cognitive agents. KIMAS '07 International Conference on Integration of Knowledge Intensive Multi-Agent Systems, Waltham, April 2007
  • Tikhanoff V, Fontanari J, Cangelosi A, Perlovsky L. Language and Cognition Integration through Modeling Field Theory: Simulations on category formation for symbol grounding. ICANN 06 International Conference on Artificial Neural Networks Athens, September 2006
  • Cangelosi A, Tikhanoff V. Integrating Action and Language in Cognitive Robots: Experiments with Modeling Field Theory. Fusion 2006 Conference Satellite EOARD workshop Firenze July 2006
  • Cangelosi A., Hourdakis E., Tikhanoff V. (2006) Language acquisition and symbol grounding transfer with neural networks and cognitive robots. Proceedings of IJCNN 2006, International Joint Conference on Neural Networks, Vancouver, July 2006
