PT Journal
AU Jiaolong Xu
   David Vazquez
   Antonio Lopez
   Javier Marin
   Daniel Ponsa
TI Learning a Part-based Pedestrian Detector in Virtual World
SO IEEE Transactions on Intelligent Transportation Systems
JI TITS
PY 2014
BP 2121
EP 2131
VL 15
IS 5
DI 10.1109/TITS.2014.2310138
DE Domain Adaptation; Pedestrian Detection; Virtual Worlds
AB Detecting pedestrians with on-board vision systems is of paramount interest for assisting drivers to prevent vehicle-to-pedestrian accidents. The core of a pedestrian detector is its classification module, which aims at deciding whether a given image window contains a pedestrian. Given the difficulty of this task, many classifiers have been proposed during the last fifteen years. Among them, the so-called (deformable) part-based classifiers, including multi-view modeling, are usually top ranked in accuracy. Training such classifiers is not trivial, since proper aspect clustering and spatial part alignment of the pedestrian training samples are crucial for obtaining an accurate classifier. In this paper, first we perform automatic aspect clustering and part alignment by using virtual-world pedestrians, i.e., human annotations are not required. Second, we use a mixture-of-parts approach that allows part sharing among different aspects. Third, these proposals are integrated in a learning framework that also allows incorporating real-world training data to perform domain adaptation between virtual- and real-world cameras. Overall, the results obtained on four popular on-board datasets show that our proposal clearly outperforms the state-of-the-art deformable part-based detector known as latent SVM.
ER