%0 Journal Article %T Factorized appearances for object detection %A Josep M. Gonfaus %A Marco Pedersoli %A Jordi Gonzalez %A Andrea Vedaldi %A Xavier Roca %J Computer Vision and Image Understanding %D 2015 %V 138 %F Josep M. Gonfaus2015 %O ISE; 600.063; 600.078 %O exported from refbase (http://refbase.cvc.uab.es/show.php?record=2705), last updated on Wed, 27 Apr 2016 11:46:05 +0200 %X Deformable object models capture variations in an object’s appearance that can be represented as image deformations. Other effects such as out-of-plane rotations, three-dimensional articulations, and self-occlusions are often captured by considering mixtures of deformable models, one per object aspect. A more scalable approach is instead to represent the variations at the level of the object parts, applying the concept of a mixture locally. Combining a few part variations can in fact cheaply generate a large number of global appearances. A limited version of this idea was proposed by Yang and Ramanan [1] for human pose detection. In this paper we apply it to the task of generic object category detection and extend it in several ways. First, we propose a model for the relationship between part appearances that is more general than the tree of Yang and Ramanan [1] and more suitable for generic categories. Second, we treat part locations as well as their appearance as latent variables, so that training does not need part annotations but only the object bounding boxes. Third, we modify the weakly-supervised learning of Felzenszwalb et al. and Girshick et al. [2], [3] to handle a significantly more complex latent structure. Our model is evaluated on standard object detection benchmarks and is found to improve over existing approaches, yielding state-of-the-art results for several object categories.
%K Object recognition %K Deformable part models %K Learning and sharing parts %K Discovering discriminative parts %U http://refbase.cvc.uab.es/files/GPG2015.pdf %U http://dx.doi.org/10.1016/j.cviu.2015.04.008 %P 92–101