

 3rd PAVIS School

on Computer Vision, Pattern Recognition, and Image Processing

October 2-5, 2012 – Sestri Levante (GE), Italy

 

Component Analysis methods for Human Sensing


--- Application deadline extended to July 15th ---


Enabling computers to understand human behavior has the potential to revolutionize many areas that benefit society, such as clinical diagnosis, human-computer interaction, and social robotics. A critical element in the design of any behavioral sensing system is finding a good representation of the data for encoding, segmenting, classifying, and predicting subtle human behavior. In this tutorial we will review component analysis (CA) techniques (e.g., kernel principal component analysis, support vector machines, spectral clustering) that are commonly used to learn spatial and temporal patterns of human behavior.
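As a point of reference, a minimal sketch of the three methods named above, applied to a toy two-dimensional dataset with scikit-learn (an illustrative choice of library and data, not the tutorial's own code), might look like this:

```python
# Illustrative sketch: kernel PCA, an SVM, and spectral clustering
# applied to a toy non-linearly separable dataset with scikit-learn.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.cluster import SpectralClustering

# Toy stand-in for behavioral data: two interleaved classes.
X, y = make_moons(n_samples=200, noise=0.05, random_state=0)

# Kernel PCA: learn a non-linear representation of the data.
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=15).fit_transform(X)
print("Kernel PCA embedding shape:", X_kpca.shape)

# SVM: classify in the kernel-induced feature space.
clf = SVC(kernel="rbf", gamma=15).fit(X, y)
print("SVM training accuracy:", clf.score(X, y))

# Spectral clustering: unsupervised grouping via a graph-Laplacian eigen-problem.
labels = SpectralClustering(n_clusters=2, affinity="rbf", gamma=15,
                            random_state=0).fit_predict(X)
print("Cluster sizes:", np.bincount(labels))
```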

The aim of CA is to decompose a signal into interesting components that explicitly or implicitly (e.g., via kernel methods) define the representation of the signal. CA techniques are especially appealing because many of them can be formulated as eigen-problems, offering great potential for efficient learning of linear and non-linear representations of the data without local minima. Although CA methods have been widely used, there is still a need for a better mathematical framework to analyze and extend CA techniques. In the first part of the tutorial we will review existing CA techniques such as PCA, LDA, NMF, and ICA, as well as standard extensions (e.g., kernel methods, latent variable models, tensor factorization). In the second part of the tutorial we will show how several extensions of CA methods outperform state-of-the-art algorithms in problems such as temporal alignment of human behavior, activity recognition, face recognition, facial expression recognition, temporal segmentation/clustering of human activities, joint segmentation and classification of human behavior, and facial feature detection in images.
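To make the eigen-problem formulation concrete, here is a minimal NumPy sketch of the simplest case, ordinary PCA, computed in closed form as an eigen-decomposition of the sample covariance matrix (illustrative only; the tutorial covers far more general formulations):

```python
# PCA as an eigen-problem: the principal components are the leading
# eigenvectors of the data covariance matrix (closed form, no local minima).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 10))  # toy data, 500 x 10

Xc = X - X.mean(axis=0)                  # center the data
C = (Xc.T @ Xc) / (Xc.shape[0] - 1)      # sample covariance matrix
evals, evecs = np.linalg.eigh(C)         # eigen-decomposition (ascending eigenvalues)
order = np.argsort(evals)[::-1]          # sort components by explained variance
W = evecs[:, order[:3]]                  # top-3 principal directions
Z = Xc @ W                               # low-dimensional representation

print("Explained variance ratio:", evals[order[:3]] / evals.sum())
```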

Applications of automatic measurement and synthesis of facial expression and prosody will include advances in basic research on nonverbal communication, avatars, and biomedical applications in psychiatry (major depressive disorder) and medicine (physical pain).

 

 

Invited Speakers

Fernando De la Torre is an Associate Research Professor in the Robotics Institute at Carnegie Mellon University. He received the BSc degree in telecommunications, as well as the MSc and PhD degrees in electronic engineering, from La Salle School of Engineering at Ramon Llull University, Barcelona, Spain, in 1994, 1996, and 2002, respectively. His research interests are in the fields of computer vision and machine learning. Specifically, he is interested in modeling and recognizing human behavior, with a focus on understanding human behavior from multimodal sensors (e.g., video, body sensors). He has done extensive work on facial image analysis (e.g., facial expression recognition, facial feature tracking). In machine learning, his interest centers on developing efficient and robust methods to model high-dimensional data. Currently, he directs the Component Analysis Laboratory and the Human Sensing Laboratory at Carnegie Mellon University. He has more than 100 publications in refereed journals and conferences, and he has organized and co-organized several workshops and given tutorials at international conferences on the use and extensions of component analysis.

 

 

Jeffrey Cohn is Professor of Psychology at the University of Pittsburgh and Adjunct Faculty at the Robotics Institute, Carnegie Mellon University. He received his PhD in psychology from the University of Massachusetts at Amherst. Dr. Cohn has led interdisciplinary and inter-institutional efforts to develop advanced methods for the automatic analysis of facial expression and prosody, and he has applied those tools to research in human emotion, interpersonal processes, social development, and psychopathology. He co-developed the influential Cohn-Kanade, MultiPIE, and Pain Archive databases, co-edited two recent special issues of Image and Vision Computing on facial expression analysis, and co-chaired the 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2008).

Last updated: Wednesday, 16 April 2014