ETHZ Dataset for Appearance-Based Modeling
The experimental results for the paper Learning Discriminative Appearance-Based Models Using Partial Least Squares (SIBGRAPI 2009) were obtained using the ETHZ dataset, which contains a large number of different people captured in uncontrolled conditions. The video sequences were recorded from moving cameras, which introduces a wide range of variation in people's appearance.
We used the ground-truth locations of people in the videos to crop each person, and then created, for each video sequence, one directory per person (p0?? – p0??) containing that person's samples. The samples in the directories retain their original size; in our experiments they were resized to 32×64 pixels. For each person, we chose one sample to learn the appearance-based model and used the remaining samples for classification (this procedure was repeated a few times and the results averaged). Performance is reported as the overall recognition rate. The figure below shows a few cropped samples from the first video sequence of the dataset.
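The split described above can be sketched as follows. This is a minimal illustration of the protocol, not the paper's code: the function name, the toy file names, and the per-person dictionary layout are assumptions for demonstration purposes.

```python
import random

def split_one_per_person(samples_by_person, seed=0):
    """For each person, pick one sample to learn the appearance-based
    model and keep the remaining samples for classification, mirroring
    the evaluation protocol described above (repeated with different
    seeds and averaged)."""
    rng = random.Random(seed)
    train, test = {}, {}
    for person, samples in samples_by_person.items():
        chosen = rng.randrange(len(samples))
        train[person] = samples[chosen]
        test[person] = [s for i, s in enumerate(samples) if i != chosen]
    return train, test

# Hypothetical toy data standing in for the cropped sample directories.
gallery = {
    "p001": ["p001_0.png", "p001_1.png", "p001_2.png"],
    "p002": ["p002_0.png", "p002_1.png"],
}
train, test = split_one_per_person(gallery)
```

Repeating the call with different `seed` values and averaging the resulting recognition rates reproduces the "repeated a few times" step of the protocol.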
Cropped samples used from all three sequences: [zip file (146 MB)]
Please cite the following paper if you use this dataset in your work:
W. R. Schwartz, L. S. Davis. Learning Discriminative Appearance-Based Models Using Partial Least Squares. In: Conference on Graphics, Patterns and Images (SIBGRAPI), 2009.