IIT@MIT is the result of a collaborative agreement between the Istituto Italiano di Tecnologia and the Massachusetts Institute of Technology. Our aim is to advance the frontiers of learning theory and machine learning while building algorithmic tools for the analysis of complex systems and high-dimensional data. Our approach blends well-established concepts and methods from computer science, statistics, and signal processing with modern results in (high-dimensional) probability to create new computational tools for statistical inference. We seek to channel theory and algorithms into new applications, including smarter technologies and sophisticated engines for inference from high-dimensional data and signals. The ultimate objective of the lab is a future generation of intelligent technologies.
Robotics is a natural testbed for machine learning solutions. The variety of sensory modalities robots are endowed with calls for learning algorithms that let a robot adapt to and interact with its environment and with humans. We develop and apply cutting-edge machine learning techniques to solve perception, cognition, and control problems in humanoid robotics.
In recent years, statistical machine learning approaches have had a major impact on computer vision problems such as object classification. We develop novel image-representation algorithms that can reduce the need for labeled data, as well as scalable machine learning solutions for classifying large image and video datasets.
The dynamic and highly variable nature of speech signals provides a challenging setting for machine learning methods. Automatic speech recognition remains largely unsolved, despite recent advances based on deep learning representations. The goal of this project is to derive invariant speech representations for recognition and classification.
The research efforts of IIT@MIT are organized into the following main projects:
It is well known that learning to solve complex tasks becomes much simpler once the right features are found. Yet feature (representation) learning is largely an open problem and one of the main challenges in machine learning. We tackle this problem by considering two complementary data-representation principles: invariance and selectivity, and efficient coding.
Invariance and Selectivity
The key idea is that useful representations should be invariant to semantically irrelevant transformations while remaining sufficiently discriminative (selective). Following this principle, we develop deep learning algorithms inspired by current neuroscience models of information processing in the visual cortex.
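As an illustration of the principle (not the lab's actual algorithm), an invariant yet selective signature can be obtained by pooling a template's responses over a transformation group. The sketch below assumes circular shifts as the transformation group and pools dot products of the input with shifted copies of a template:

```python
import numpy as np

def invariant_signature(x, template, shifts):
    """Pool template responses over a transformation group (here: shifts).

    Responses are dot products of the signal with circularly shifted
    copies of the template; pooling them (mean, and max of magnitudes)
    yields a signature invariant to circular shifts of the input, while
    remaining selective through the choice of template.
    """
    responses = np.array([x @ np.roll(template, s) for s in shifts])
    return np.array([responses.mean(), np.abs(responses).max()])
```

Because shifting the input only permutes the set of responses, any pooling function applied over the full group of shifts produces a shift-invariant signature.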
While a good data representation should ultimately reduce the need for supervision, in practice unsupervised approaches to data representation rely on reconstruction as a general starting requirement. To make the approach sound, an efficiency requirement must be specified as well; here, efficiency means seeking parsimonious forms of reconstruction.
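One standard instance of parsimonious reconstruction is sparse coding, where a signal is reconstructed from a few dictionary atoms. A minimal sketch (illustrative, not the lab's specific method) using the iterative soft-thresholding algorithm (ISTA) on the l1-penalized reconstruction objective:

```python
import numpy as np

def ista(x, D, lam=0.1, n_iter=200):
    """Sparse code z minimizing 0.5 * ||x - D z||^2 + lam * ||z||_1.

    Classic ISTA: a gradient step on the reconstruction term followed
    by soft-thresholding, which enforces parsimony (few active atoms).
    """
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = z - D.T @ (D @ z - x) / L          # gradient step on reconstruction
        z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return z
```

The penalty weight `lam` trades reconstruction accuracy against parsimony: larger values drive more coefficients exactly to zero.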
The availability of large-scale datasets requires the development of ever more efficient machine learning procedures. A key step towards scalability is tailoring computational requirements to the generalization properties of the data, rather than to their raw amount. This project aims at blending statistical and optimization principles to design new, sound, and scalable learning machines.
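A concrete example of this blend (offered as an illustration, not as the project's actual algorithm) is early stopping of gradient descent on least squares: the iteration count simultaneously controls computational cost and acts as an implicit regularizer, so it can be tuned to the statistical properties of the data rather than only to its size:

```python
import numpy as np

def gd_least_squares(X, y, n_iter):
    """Gradient descent on the least-squares objective 0.5 * ||X w - y||^2.

    Stopping after a finite number of iterations regularizes the
    estimate (implicit, iteration-count regularization): fewer passes
    mean both less computation and less overfitting.
    """
    step = 1.0 / np.linalg.norm(X, 2) ** 2     # safe fixed step size
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w -= step * X.T @ (X @ w - y)
    return w
```

On noisy data, the iteration count plays the role that the penalty weight plays in ridge regression, and it can be chosen by cross-validation.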
Classical machine learning, based on estimating multivariate functions with scalar outputs, may not be adequate for the variety of structured data with no natural vectorial representation. Leveraging analytic and optimization techniques, we develop novel modeling principles and corresponding implementations to deal with structured learning problems, and in particular transfer learning problems.
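For instance, when outputs are vectors rather than scalars, the simplest step beyond the classical setting is multi-output (vector-valued) ridge regression, in which all output coordinates share one regularized linear model. A minimal sketch under that assumption:

```python
import numpy as np

def multioutput_ridge(X, Y, lam=0.1):
    """Fit W minimizing ||X W - Y||_F^2 + lam * ||W||_F^2.

    X: (n, d) inputs; Y: (n, T) vector-valued outputs.
    Closed form: W = (X^T X + lam * I)^{-1} X^T Y, solved jointly for
    all T outputs. Structured variants couple the outputs further,
    e.g. through an output-similarity matrix.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
```

Solving for all outputs jointly is the starting point for transfer: information estimated for one output (task) can constrain the others.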
- iCub Facility (IIT)
- Robotics Brain and Cognitive Sciences (IIT)
- Neuroscience and Brain Technologies (IIT)
- Massachusetts Institute of Technology, Center for Brains, Minds and Machines
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, Università degli studi di Genova
- Google DeepMind
- Duke University