The proposed dataset is composed of four groups of data collected with the Kinect. The first group ("Collaborative") was obtained by recording 79 people from a frontal view, walking slowly, avoiding occlusions, and with outstretched arms. This acquisition took place in an indoor scenario, with the people at least 2 meters away from the camera. The second ("Walking1") and third ("Walking2") groups consist of frontal recordings of the same 79 people walking normally while entering the lab where they usually work. The fourth group ("Backwards") is a back-view recording of the people walking away from the lab. Since the acquisitions were performed on different days, there is no guarantee that visual aspects such as clothing or accessories remain constant. Moreover, we asked some people to wear the same t-shirt in "Walking2". This is useful to highlight the power of RGB-D re-identification compared with standard appearance-based methods.
We provide five synchronized data streams for each person: 1) a set of 5 RGB images, 2) the foreground masks, 3) the skeletons, 4) the 3D mesh (PLY), 5) the estimated floor. We also provide a MATLAB script to read the data. Since the data are in standard formats (images, text, and PLY files), you can easily implement your own parser in your favourite programming language.
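As an illustration of how little is needed to parse the standard formats, here is a minimal sketch of a reader for the vertex coordinates of an ASCII PLY mesh. It is not the provided MATLAB script, just a hypothetical example; it assumes the meshes are stored in ASCII PLY with x, y, z as the first vertex properties.

```python
def read_ply_vertices(lines):
    """Parse (x, y, z) vertex coordinates from the lines of an ASCII PLY file.

    Assumes the first three vertex properties are x, y, z, which is the
    common layout for ASCII PLY meshes (an assumption, not a guarantee
    about this dataset's files).
    """
    it = iter(lines)
    assert next(it).strip() == "ply", "not a PLY file"
    n_vertices = 0
    # Scan the header for the vertex count, then stop at end_header.
    for line in it:
        line = line.strip()
        if line.startswith("element vertex"):
            n_vertices = int(line.split()[-1])
        elif line == "end_header":
            break
    # Read one vertex per line, keeping only the first three floats.
    vertices = []
    for _ in range(n_vertices):
        x, y, z = map(float, next(it).split()[:3])
        vertices.append((x, y, z))
    return vertices


# Small in-memory example with two vertices.
sample = """ply
format ascii 1.0
element vertex 2
property float x
property float y
property float z
end_header
0.0 1.0 2.0
3.0 4.0 5.0
""".splitlines()

print(read_ply_vertices(sample))  # -> [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)]
```

To read an actual mesh file, pass `open("mesh.ply")` (a hypothetical file name) in place of `sample`; the RGB images and text files can be handled with any standard image library and plain text parsing.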
Instructions are in README.txt.