%0 Journal Article
%T Virtual and Real World Adaptation for Pedestrian Detection
%A David Vazquez
%A Javier Marin
%A Antonio Lopez
%A Daniel Ponsa
%A David Geronimo
%J IEEE Transactions on Pattern Analysis and Machine Intelligence
%D 2014
%V 36
%N 4
%@ 0162-8828
%F David Vazquez2014
%O ADAS; 600.057; 600.054; 600.076
%X Pedestrian detection is of paramount interest for many applications. The most promising detectors rely on discriminatively learnt classifiers, i.e., classifiers trained with annotated samples. However, the annotation step is a human-intensive and subjective task worth minimizing. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? The conducted experiments show that virtual-world-based training can provide excellent testing accuracy in the real world, but it can also suffer from the dataset shift problem, as real-world-based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain-adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy as training with many human-provided pedestrian annotations and testing on real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating the adaptation of virtual and real worlds for developing an object detector.
%K Domain Adaptation
%K Pedestrian Detection
%U http://refbase.cvc.uab.es/files/vml2013.pdf
%U http://dx.doi.org/10.1109/TPAMI.2013.163
%P 797-809