%0 Conference Proceedings
%T Unsupervised Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection
%A David Vazquez
%A Antonio Lopez
%A Daniel Ponsa
%B 21st International Conference on Pattern Recognition
%D 2012
%I IEEE
%C Tsukuba Science City, JAPAN
%@ 1051-4651
%@ 978-1-4673-2216-4
%F David Vazquez2012
%O ADAS
%O exported from refbase (http://refbase.cvc.uab.es/show.php?record=1981), last updated on Sun, 15 Feb 2015 22:42:00 +0100
%X Vision-based object detectors are crucial for many applications and rely on learnt object models. Ideally, we would deploy our vision system in the scenario where it must operate and let it self-learn how to distinguish the objects of interest, i.e., without human intervention. However, learning each object model requires labelled samples collected through a tiresome manual process. For instance, we are interested in exploring the self-training of a pedestrian detector for driver assistance systems. Our first approach to avoiding manual labelling was to use samples from realistic computer graphics, whose labels are automatically available [12]. This would make the desired self-training of our pedestrian detector possible. However, as we showed in [14], there may be a dataset shift between the virtual and real worlds. To overcome it, we propose the use of unsupervised domain adaptation techniques that avoid human intervention during the adaptation process. In particular, this paper explores the use of the transductive SVM (T-SVM) learning algorithm to adapt virtual and real worlds for pedestrian detection (Fig. 1).
%K Pedestrian Detection
%K Domain Adaptation
%K Virtual worlds
%U http://refbase.cvc.uab.es/files/VLP2012a.pdf
%P 3492-3495