G.D. Evangelidis, Ferran Diego, Joan Serrat and Antonio Lopez. 2011. Slice Matching for Accurate Spatio-Temporal Alignment. ICCV Workshop on Visual Surveillance.
Abstract: Video synchronization and alignment is a rather recent topic in computer vision. It usually deals with the problem of aligning sequences recorded simultaneously by static, jointly-moving or independently-moving cameras. In this paper, we investigate the more difficult problem of matching videos captured at different times from independently-moving cameras whose trajectories are approximately coincident or parallel. To this end, we propose a novel method that aligns videos pixel-wise and thus allows their differences to be highlighted automatically. This primarily targets visual surveillance, but the method can be adopted as-is by other related video applications, such as object transfer (augmented reality) or high dynamic range video. We build upon a slice matching scheme to first synchronize the sequences, and we develop a spatio-temporal alignment scheme to spatially register corresponding frames and refine the temporal mapping. We investigate the performance of the proposed method on videos recorded from vehicles driven along different types of roads and compare it with related previous work.
Keywords: video alignment
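As an illustrative sketch only (not the paper's slice matching algorithm), temporal synchronization of two sequences can be posed as finding the frame offset that maximizes the correlation of per-frame signatures. All data and names below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-frame "slice" signatures for two drives along the same road: the
# second sequence is the first delayed by 7 frames, plus measurement noise.
a = rng.normal(size=100)
b = np.concatenate([rng.normal(size=7), a])[:100] + 0.1 * rng.normal(size=100)

# Synchronize by picking the temporal offset that maximizes correlation
# between the overlapping portions of the two signature sequences.
offsets = range(-15, 16)
scores = [np.corrcoef(a[max(0, -k):100 - max(0, k)],
                      b[max(0, k):100 - max(0, -k)])[0, 1] for k in offsets]
best = list(offsets)[int(np.argmax(scores))]
print(best)
```

The real method matches spatio-temporal slices rather than 1-D signatures, but the underlying idea of scoring candidate temporal mappings is the same.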
Gemma Roig, Xavier Boix, F. de la Torre, Joan Serrat and C. Vilella. 2011. Hierarchical CRF with product label spaces for parts-based Models. IEEE Conference on Automatic Face and Gesture Recognition. 657–664.
Abstract: Non-rigid object detection is a challenging and open research problem in computer vision. It is a critical part of many applications such as image search, surveillance, human-computer interaction and image auto-annotation. Most successful approaches to non-rigid object detection make use of part-based models. In particular, Conditional Random Fields (CRFs) have been successfully embedded into a discriminative parts-based model framework due to their effectiveness for learning and inference (usually based on a tree structure). However, CRF-based approaches do not incorporate global constraints and only model pairwise interactions. This is especially important when modeling object classes that may have complex part interactions (e.g. facial features or body articulations), because neglecting them yields an oversimplified model with suboptimal performance. To overcome this limitation, this paper proposes a novel hierarchical CRF (HCRF). The main contribution is to build a hierarchy of part combinations by extending the label set to a hierarchy of product label spaces. In order to keep the inference computation tractable, we propose an effective method to reduce the new label set. We test our method on two applications: facial feature detection on the Multi-PIE database and human pose estimation on the Buffy dataset.
Keywords: Shape; Computational modeling; Principal component analysis; Random variables; Color; Upper bound; Facial features
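The product-label-space idea can be sketched in a few lines: joint assignments of two parts become single labels of a higher-level node, and the set is pruned to stay tractable. The parts, candidate locations and geometric constraint below are purely illustrative, not taken from the paper:

```python
import itertools

# Two hypothetical facial parts, each with candidate (x, y) locations.
eye_candidates = [(10, 12), (11, 13), (30, 40)]
mouth_candidates = [(12, 30), (28, 41), (13, 31)]

# Product label space: every joint (eye, mouth) assignment becomes ONE label
# of a higher-level node, so the pair can be scored jointly.
product_labels = list(itertools.product(eye_candidates, mouth_candidates))

# Keep the label set tractable: prune joint labels violating a geometric
# constraint (here, the mouth must lie 10-30 pixels below the eye).
def compatible(eye, mouth):
    return 10 <= mouth[1] - eye[1] <= 30

reduced = [(e, m) for e, m in product_labels if compatible(e, m)]
print(len(product_labels), len(reduced))  # → 9 6
```

In the paper the reduction is learned rather than hand-coded, but the effect is the same: higher-order part interactions are modeled while the label set stays small.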
Yainuvis Socarras, Sebastian Ramos, David Vazquez, Antonio Lopez and Theo Gevers. 2013. Adapting Pedestrian Detection from Synthetic to Far Infrared Images. ICCV Workshop on Visual Domain Adaptation and Dataset Bias. Sydney, Australia.
Abstract: We present different techniques to adapt a pedestrian classifier, trained with synthetic images and the corresponding automatically generated annotations, to operate with far infrared (FIR) images. The information contained in this kind of image allows us to develop a robust pedestrian detector that is invariant to extreme illumination changes.
Keywords: Domain Adaptation; Far Infrared; Pedestrian Detection
Patricia Marquez, Debora Gil and Aura Hernandez-Sabate. 2013. Evaluation of the Capabilities of Confidence Measures for Assessing Optical Flow Quality. ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars. 624–631.
Abstract: Assessing Optical Flow (OF) quality is essential for its further use in reliable decision support systems. The absence of ground truth in such situations leads to the computation of OF Confidence Measures (CMs) obtained from either input or output data. A fair comparison of the capabilities of the different CMs for bounding OF error is required in order to choose the best OF-CM pair for discarding points where the OF computation is not reliable. This paper presents a statistical probabilistic framework for assessing the quality of a given CM. Our quality measure is given in terms of the percentage of pixels whose OF error bound cannot be determined from CM values. We also provide statistical tools for the computation of CM values that ensure a given accuracy of the flow field.
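A minimal sketch of the kind of quantity involved, under invented data and thresholds (this is not the paper's statistical framework): an input-based confidence measure such as local gradient magnitude flags low-texture pixels where optical flow is typically unreliable, and one then reports the fraction of pixels falling below a confidence threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy image and a simple input-based confidence measure: the local gradient
# magnitude (low texture -> unreliable optical flow). Names are illustrative.
img = rng.random((64, 64))
gy, gx = np.gradient(img)
confidence = np.hypot(gx, gy)

# Fraction of pixels whose confidence falls below a threshold tau, i.e.
# pixels where this CM cannot provide a useful OF error bound.
tau = np.quantile(confidence, 0.1)
uncertain = np.mean(confidence < tau)
print(f"{100 * uncertain:.1f}% of pixels below the confidence threshold")
```

The paper's contribution is choosing tau so that the retained pixels satisfy a given error bound with known probability; the sketch only shows the bookkeeping.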
Jiaolong Xu, Sebastian Ramos, Xu Hu, David Vazquez and Antonio Lopez. 2013. Multi-task Bilinear Classifiers for Visual Domain Adaptation. Advances in Neural Information Processing Systems Workshop.
Abstract: We propose a method that aims to lessen the significant accuracy degradation that a discriminative classifier can suffer when it is trained in a specific domain (source domain) and applied in a different one (target domain). The principal reason for this degradation is the discrepancy between the distributions of the features that feed the classifier in the different domains. Therefore, we propose a domain adaptation method that maps the features from the different domains into a common subspace and learns a discriminative domain-invariant classifier within it. Our algorithm combines bilinear classifiers and multi-task learning for domain adaptation. The bilinear classifier encodes the feature transformation and classification parameters by a matrix decomposition. In this way, specific feature transformations for multiple domains and a shared classifier are jointly learned in a multi-task learning framework. Focusing on domain adaptation for visual object detection, we apply this method to the state-of-the-art deformable part-based model for cross-domain pedestrian detection. Experimental results show that our method significantly mitigates the domain drift and improves accuracy compared to several baselines.
Keywords: Domain Adaptation; Pedestrian Detection; ADAS
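The bilinear decomposition can be illustrated with a tiny sketch (dimensions, domains and data are invented; the paper additionally learns the factors jointly with multi-task regularization, which is omitted here): each domain d gets its own projection U_d into a common subspace, while a single classifier w is shared across domains.

```python
import numpy as np

# Hypothetical dimensions: feature dim F, common subspace dim K, two domains.
F, K = 20, 5
rng = np.random.default_rng(0)

# Per-domain feature transformations U_d (F x K) and a shared classifier w (K,).
U = {"source": rng.normal(size=(F, K)), "target": rng.normal(size=(F, K))}
w = rng.normal(size=K)

def score(x, domain):
    """Bilinear score: project x into the common subspace, then classify.

    Equivalent to x @ W_d with the rank-limited decomposition W_d = U_d @ w.
    """
    return x @ U[domain] @ w

x = rng.normal(size=F)
print(score(x, "source"), score(x, "target"))
```

The matrix decomposition is what couples the tasks: updating w affects every domain, while each U_d absorbs domain-specific feature shift.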
Javier Marin, David Vazquez, Antonio Lopez, Jaume Amores and Bastian Leibe. 2013. Random Forests of Local Experts for Pedestrian Detection. 15th IEEE International Conference on Computer Vision. IEEE, 2592–2599.
Abstract: Pedestrian detection is one of the most challenging tasks in computer vision and has received a great deal of attention in recent years. Recently, some authors have shown the advantages of using combinations of part/patch-based detectors in order to cope with the large variability of poses and the existence of partial occlusions. In this paper, we propose a pedestrian detection method that efficiently combines multiple local experts by means of a Random Forest ensemble. The proposed method works with rich block-based representations such as HOG and LBP, in such a way that the same features are reused by the multiple local experts, so no extra computational cost is incurred with respect to a holistic method. Furthermore, we demonstrate how to integrate the proposed approach with a cascaded architecture in order to achieve not only high accuracy but also acceptable efficiency. In particular, the resulting detector operates at five frames per second on a laptop. We tested the proposed method on well-known challenging datasets such as Caltech, ETH, Daimler and INRIA. The method proposed in this work consistently ranks among the top performers on all of the datasets, either being the best method or trailing the best one by a small margin.
Keywords: ADAS; Random Forest; Pedestrian Detection
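The feature-reuse idea can be sketched with a much simpler stand-in than the paper's Random Forest: a weighted vote of block-wise "local experts", where every expert reads a different block of the same precomputed descriptor, so adding experts costs no extra feature extraction. All data and dimensions below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for window descriptors: 8 blocks of 8 features each (think
# concatenated HOG/LBP blocks); the class signal lives in the first block.
X = rng.normal(size=(200, 64))
y = np.where(X[:, :8].sum(axis=1) > 0, 1, -1)

# One "local expert" per block: a linear score on that block alone, weighted
# in the ensemble by how well it correlates with the labels on training data.
block_means = X.reshape(200, 8, 8).mean(axis=2)          # (samples, blocks)
weights = np.array([np.corrcoef(block_means[:, b], y)[0, 1] for b in range(8)])

def predict(X):
    bm = X.reshape(len(X), 8, 8).mean(axis=2)
    return np.where(bm @ weights >= 0, 1, -1)

print(np.mean(predict(X) == y))
```

The informative block dominates the vote while uninformative experts receive near-zero weight; the paper achieves the analogous effect with randomized trees over shared block features.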
David Vazquez, Antonio Lopez and Daniel Ponsa. 2012. Unsupervised Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection. 21st International Conference on Pattern Recognition. Tsukuba Science City, Japan, IEEE, 3492–3495.
Abstract: Vision-based object detectors are crucial for different applications. They rely on learnt object models. Ideally, we would like to deploy our vision system in the scenario where it must operate and lead it to self-learn how to distinguish the objects of interest, i.e., without human intervention. However, learning each object model requires labelled samples collected through a tiresome manual process. For instance, we are interested in exploring the self-training of a pedestrian detector for driver assistance systems. Our first approach to avoiding manual labelling consisted of using samples coming from realistic computer graphics, so that their labels are automatically available [12]. This would make the desired self-training of our pedestrian detector possible. However, as we showed in [14], there may be a dataset shift between the virtual and real worlds. In order to overcome it, we propose the use of unsupervised domain adaptation techniques that avoid human intervention during the adaptation process. In particular, this paper explores the use of the transductive SVM (T-SVM) learning algorithm in order to adapt virtual and real worlds for pedestrian detection (Fig. 1).
Keywords: Pedestrian Detection; Domain Adaptation; Virtual worlds
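The transductive idea can be sketched with a crude self-training approximation (a true T-SVM optimizes the unlabeled margin directly; here we just pseudo-label and retrain). The toy "virtual" and "real" data and the subgradient SVM are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: labeled "virtual" (source) and unlabeled "real" (target) samples,
# drawn from slightly shifted Gaussians to mimic dataset shift.
Xs = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
ys = np.array([-1] * 50 + [1] * 50)
Xt = np.vstack([rng.normal(-1.5, 1, (50, 2)), rng.normal(2.5, 1, (50, 2))])

def train_linear(X, y, epochs=100, lr=0.01, lam=0.01):
    """Plain linear SVM via subgradient descent on the hinge loss."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:
                w += lr * (yi * xi - lam * w); b += lr * yi
            else:
                w -= lr * lam * w
    return w, b

# Self-training in the spirit of T-SVM: pseudo-label the unlabeled target
# data with the source classifier, then retrain on the union.
w, b = train_linear(Xs, ys)
yt_pseudo = np.sign(Xt @ w + b)
w, b = train_linear(np.vstack([Xs, Xt]), np.concatenate([ys, yt_pseudo]))
acc = np.mean(np.sign(Xt @ w + b) == np.array([-1] * 50 + [1] * 50))
print(acc)
```

No target labels are used at any point, which is the property that makes the adaptation unsupervised.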
Jose Carlos Rubio, Joan Serrat, Antonio Lopez and N. Paragios. 2012. Image Contextual Representation and Matching through Hierarchies and Higher Order Graphs. 21st International Conference on Pattern Recognition. 2664–2667.
Abstract: We present a region matching algorithm which establishes correspondences between regions from two segmented images. An abstract graph-based representation encodes the image in a hierarchical graph, exploiting the scene properties at two levels. First, the similarity and spatial consistency of the image's semantic objects are encoded in a graph of commute times. Second, the cluttered regions of the semantic objects are represented with a shape descriptor. Many-to-many matching of regions is especially challenging due to the instability of the segmentation under slight image changes, and we handle it explicitly through high-order potentials. We demonstrate the matching approach applied to images of world-famous buildings, captured under different conditions, showing the robustness of our method to large variations in illumination and viewpoint.
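Commute times on a region graph can be computed from the pseudoinverse of the graph Laplacian; the tiny adjacency matrix below is invented to illustrate the standard formula, not taken from the paper:

```python
import numpy as np

# Small weighted region-adjacency graph over 4 regions (illustrative weights).
W = np.array([[0.0, 1.0, 0.2, 0.0],
              [1.0, 0.0, 0.3, 0.0],
              [0.2, 0.3, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
D = np.diag(W.sum(axis=1))
L = D - W
Lp = np.linalg.pinv(L)          # Moore-Penrose pseudoinverse of the Laplacian
vol = W.sum()                   # graph volume (sum of degrees)

def commute_time(i, j):
    """Expected round-trip time of a random walk between regions i and j:
    C(i, j) = vol * (L+_ii + L+_jj - 2 * L+_ij)."""
    return vol * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j])

# Strongly connected regions (0,1) come out closer than weakly connected (0,3).
print(commute_time(0, 1), commute_time(0, 3))
```

Commute time is a robust graph distance because it aggregates over all paths, which is why it tolerates local segmentation instability better than shortest-path measures.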
Diego Cheda, Daniel Ponsa and Antonio Lopez. 2012. Monocular Egomotion Estimation based on Image Matching. 1st International Conference on Pattern Recognition Applications and Methods. 425–430.
Diego Cheda, Daniel Ponsa and Antonio Lopez. 2012. Monocular Depth-based Background Estimation. 7th International Conference on Computer Vision Theory and Applications. 323–328.
Abstract: In this paper, we address the problem of reconstructing the background of a scene from a video sequence with occluding objects. The images are taken by hand-held cameras. Our method composes the background by selecting the appropriate pixels from previously aligned input images. To do that, we minimize a cost function that penalizes deviations from the following assumptions: the background comprises the objects whose distance to the camera is maximal, and background objects are stationary. Distance information is roughly obtained by a supervised learning approach that allows us to distinguish between close and distant image regions. Moving foreground objects are filtered out using stationarity and motion-boundary constancy measurements. The cost function is minimized by a graph-cuts method. We demonstrate the applicability of our approach by recovering an occlusion-free background in a set of sequences.
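A minimal sketch of the pixel-selection idea, with invented data and a pixel-wise argmin standing in for the paper's graph-cuts optimization (which additionally enforces spatial smoothness): each output pixel is taken from the input frame whose pixel best satisfies the depth and stationarity assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stack of 3 aligned grayscale frames plus rough per-pixel depth scores.
frames = rng.random((3, 32, 32))
depth = rng.random((3, 32, 32))                # higher = farther from camera
motion = np.abs(frames - frames.mean(axis=0))  # temporal deviation, high = moving

# Per-pixel data cost: prefer distant, stationary pixels. The 2.0 weight on
# the motion term is an arbitrary illustrative choice.
cost = -depth + 2.0 * motion
choice = cost.argmin(axis=0)                   # which frame supplies each pixel
background = np.take_along_axis(frames, choice[None], axis=0)[0]
print(background.shape)
```

Replacing the independent per-pixel argmin with a graph-cuts solve is what removes seams between regions taken from different frames.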