Vast amounts of data are being generated by a growing and varied range of sources in challenging real-world domains, making it increasingly necessary to employ methods from computational statistics and machine learning to infer structure and extract patterns from data.

The assumptions underlying this structure often have a simple and intuitive interpretation, but developing efficient algorithms is challenging and involves trade-offs between statistical guarantees and the computational constraints imposed by large datasets.

Machine learning is also a driving force behind recent efforts to build general artificial intelligence systems, with deep learning methods achieving state-of-the-art results in a number of data-driven disciplines, such as computer vision and speech recognition. This poses new challenges for theoreticians, and many important questions remain to be answered.

Our research in machine learning draws inspiration from several disciplines of mathematics and statistics: approximation theory, empirical processes, numerical optimization, and statistical learning theory. Specific areas of interest include kernel methods, multitask and transfer learning, online learning, reinforcement learning and sparsity regularization. While the primary focus of our work is on theory and algorithms, our research has a broad multidisciplinary component, with applications spanning computer vision, bioinformatics and user modelling, among others.

Collaborations

  • SUNY Albany (Charles Micchelli)
  • INSEAD (Theodoros Evgeniou)
  • University College London (Nadia Berthouze, David Jones, Mark Herbster)
  • ENSAE ParisTech (Alexandre Tsybakov)
  • University of Cambridge (Cecilia Mascolo)
  • University of Oxford (Raphael Hauser)

Publications

  1. A. Maurer, M. Pontil, B. Romera-Paredes. The benefit of multitask representation learning. Journal of Machine Learning Research (forthcoming).
  2. M. Herbster, S. Pasteris, M. Pontil. Predicting a switching sequence of graph labelings. Journal of Machine Learning Research, 16:2003-2022, 2015.
  3. C.A. Micchelli, J.M. Morales, M. Pontil. Regularizers for structured sparsity. Advances in Computational Mathematics, 38(3):455-489, 2013.
  4. A. Maurer, M. Pontil. K-dimensional coding schemes in Hilbert spaces. IEEE Transactions on Information Theory, 56(11):5839-5846, 2010.
  5. K. Lounici, M. Pontil, A.B. Tsybakov, S. van de Geer. Taking advantage of sparsity in multi-task learning. COLT 2009.
  6. A. Argyriou, T. Evgeniou, M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243-272, 2008.