|
David Vazquez, Antonio Lopez, Daniel Ponsa and Javier Marin. 2011. Cool world: domain adaptation of virtual and real worlds for human detection using active learning. NIPS Domain Adaptation Workshop: Theory and Application. Granada, Spain.
Abstract: Image-based human detection is of paramount interest for different applications. The most promising human detectors rely on discriminatively learnt classifiers, i.e., trained with labelled samples. However, labelling is a labour-intensive task, especially in cases like human detection, where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, in Marin et al. we proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of realistic computer graphics. Later, these models are used for human detection in real-world images. The results of this technique are surprisingly good. However, they are not always as good as those of the classical approach of training and testing with data coming from the same camera and the same type of scenario. Accordingly, in Vazquez et al. we cast the problem as one of supervised domain adaptation. In doing so, we assume that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we use an active learning technique. Thus, ultimately our human model is learnt from the combination of virtual- and real-world labelled samples, which, to the best of our knowledge, had not been done before. Here, we term such a combined space the cool world. In this extended abstract we summarize our proposal and include quantitative results from Vazquez et al. showing its validity.
Keywords: Pedestrian Detection; Virtual; Domain Adaptation; Active Learning
|
|
|
Arnau Ramisa, David Aldavert, Shrihari Vasudevan, Ricardo Toledo and Ramon Lopez de Mantaras. 2011. The IIIA30 Mobile Robot Object Recognition Dataset. 11th Portuguese Robotics Open.
Abstract: Object perception is a key capability for enabling mobile robots to perform high-level tasks. However, research aimed at addressing the constraints and limitations encountered in a mobile robotics scenario, like low image resolution, motion blur or tight computational constraints, is still very scarce. In order to facilitate future research in this direction, in this work we present an object detection and recognition dataset acquired using a mobile robotic platform. As a baseline for the dataset, we evaluated the Viola and Jones object detection method based on a cascade of weak classifiers.
|
|
|
Mohammad Rouhani and Angel Sappa. 2011. Implicit B-Spline Fitting Using the 3L Algorithm. 18th IEEE International Conference on Image Processing. 893–896.
|
|
|
Marçal Rusiñol, David Aldavert, Ricardo Toledo and Josep Llados. 2011. Browsing Heterogeneous Document Collections by a Segmentation-Free Word Spotting Method. 11th International Conference on Document Analysis and Recognition. 63–67.
Abstract: In this paper, we present a segmentation-free word spotting method that is able to deal with heterogeneous document image collections. We propose a patch-based framework where patches are represented by a bag-of-visual-words model powered by SIFT descriptors. A later refinement of the feature vectors is performed by applying the latent semantic indexing technique. The proposed method performs well on both handwritten and typewritten historical document images. We have also tested our method on documents written in non-Latin scripts.
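The patch-retrieval pipeline the abstract outlines can be illustrated with a minimal numpy sketch: patches described by bag-of-visual-words histograms, refined with latent semantic indexing (a truncated SVD), and ranked by cosine similarity against a query word. The vocabulary, histogram data and variable names below are synthetic stand-ins, not the authors' implementation, which builds its histograms from quantized SIFT descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each patch is described by a bag-of-visual-words histogram
# over a small vocabulary (stand-in for quantized SIFT descriptors).
n_words, n_patches = 64, 200
histograms = rng.poisson(1.0, size=(n_patches, n_words)).astype(float)

# Latent semantic indexing: project histograms onto the top-k right
# singular directions of the patch-by-word matrix.
k = 16
U, S, Vt = np.linalg.svd(histograms, full_matrices=False)
topics = Vt[:k]                      # (k, n_words) latent "topics"
embed = lambda H: H @ topics.T       # map histograms into LSI space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# A query word image is itself a histogram; rank patches by similarity
# in the refined LSI space rather than in raw histogram space.
query = histograms[0] + rng.poisson(0.2, n_words)  # noisy copy of patch 0
q, P = embed(query[None])[0], embed(histograms)
scores = [cosine(q, p) for p in P]
best = int(np.argmax(scores))        # index of the best-matching patch
```

In a segmentation-free setting the candidate patches would come from a dense sliding window over the page rather than from pre-segmented words.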
|
|
|
G.D. Evangelidis, Ferran Diego, Joan Serrat and Antonio Lopez. 2011. Slice Matching for Accurate Spatio-Temporal Alignment. In ICCV Workshop on Visual Surveillance.
Abstract: Video synchronization and alignment is a rather recent topic in computer vision. It usually deals with the problem of aligning sequences recorded simultaneously by static, jointly- or independently-moving cameras. In this paper, we investigate the more difficult problem of matching videos captured at different times from independently-moving cameras, whose trajectories are approximately coincident or parallel. To this end, we propose a novel method that pixel-wise aligns videos and allows thus to automatically highlight their differences. This primarily aims at visual surveillance but the method can be adopted as is by other related video applications, like object transfer (augmented reality) or high dynamic range video. We build upon a slice matching scheme to first synchronize the sequences, while we develop a spatio-temporal alignment scheme to spatially register corresponding frames and refine the temporal mapping. We investigate the performance of the proposed method on videos recorded from vehicles driven along different types of roads and compare with related previous works.
Keywords: video alignment
|
|
|
G. Roig, Xavier Boix, F. de la Torre, Joan Serrat and C. Vilella. 2011. Hierarchical CRF with product label spaces for parts-based Models. IEEE Conference on Automatic Face and Gesture Recognition.
Abstract: Non-rigid object detection is a challenging and open research problem in computer vision. It is a critical part of many applications such as image search, surveillance, human-computer interaction or image auto-annotation. Most successful approaches to non-rigid object detection make use of part-based models. In particular, Conditional Random Fields (CRF) have been successfully embedded into a discriminative parts-based model framework due to their effectiveness for learning and inference (usually based on a tree structure). However, CRF-based approaches do not incorporate global constraints and only model pairwise interactions. This is especially important when modeling object classes that may have complex part interactions (e.g. facial features or body articulations), because neglecting them yields an oversimplified model with suboptimal performance. To overcome this limitation, this paper proposes a novel hierarchical CRF (HCRF). The main contribution is to build a hierarchy of part combinations by extending the label set to a hierarchy of product label spaces. In order to keep the inference computation tractable, we propose an effective method to reduce the new label set. We test our method on two applications: facial feature detection on the Multi-PIE database and human pose estimation on the Buffy dataset.
|
|
|
Yainuvis Socarras, Sebastian Ramos, David Vazquez, Antonio Lopez and Theo Gevers. 2013. Adapting Pedestrian Detection from Synthetic to Far Infrared Images. ICCV Workshop on Visual Domain Adaptation and Dataset Bias. Sydney, Australia.
Abstract: We present different techniques to adapt a pedestrian classifier trained with synthetic images and the corresponding automatically generated annotations to operate with far infrared (FIR) images. The information contained in this kind of image allows us to develop a robust pedestrian detector invariant to extreme illumination changes.
Keywords: Domain Adaptation; Far Infrared; Pedestrian Detection
|
|
|
Patricia Marquez, Debora Gil and Aura Hernandez-Sabate. 2013. Evaluation of the Capabilities of Confidence Measures for Assessing Optical Flow Quality. ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars. 624–631.
Abstract: Assessing Optical Flow (OF) quality is essential for its further use in reliable decision support systems. The absence of ground truth in such situations leads to the computation of OF Confidence Measures (CM) obtained from either input or output data. A fair comparison across the capabilities of the different CM for bounding OF error is required in order to choose the best OF-CM pair for discarding points where OF computation is not reliable. This paper presents a statistical probabilistic framework for assessing the quality of a given CM. Our quality measure is given in terms of the percentage of pixels whose OF error bound cannot be determined by CM values. We also provide statistical tools for the computation of CM values that ensure a given accuracy of the flow field.
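The idea of scoring a confidence measure by the percentage of pixels whose error bound cannot be determined can be illustrated with a toy sketch. The errors, confidence values and the prefix-coverage thresholding rule below are our own simplification for illustration, not the statistical framework of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-pixel optical-flow error and a confidence measure that
# is (noisily) anti-correlated with it: high confidence ~ low error.
n = 10_000
error = rng.exponential(0.5, n)                       # endpoint error per pixel
confidence = 1.0 / (1.0 + error) + rng.normal(0.0, 0.05, n)

def quality(conf, err, eps, coverage=0.95):
    """Fraction of pixels whose error bound eps cannot be vouched for.

    Sweep a confidence threshold from high to low, keeping the prefix of
    most-confident pixels as long as at least `coverage` of the kept
    pixels satisfy err < eps; the rest are "indeterminate" pixels.
    """
    order = np.argsort(-conf)                         # most confident first
    err_sorted = err[order]
    ok = np.cumsum(err_sorted < eps) / np.arange(1, err.size + 1)
    good = np.nonzero(ok >= coverage)[0]
    kept = int(good[-1]) + 1 if good.size else 0
    return 1.0 - kept / err.size

frac = quality(confidence, error, eps=1.0)            # lower is better
```

A lower score means the confidence measure lets us certify the error bound on more of the image; comparing scores across OF-CM pairs is the selection criterion the abstract motivates.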
|
|
|
Jiaolong Xu, Sebastian Ramos, Xu Hu, David Vazquez and Antonio Lopez. 2013. Multi-task Bilinear Classifiers for Visual Domain Adaptation. Advances in Neural Information Processing Systems Workshop.
Abstract: We propose a method that aims to lessen the significant accuracy degradation that a discriminative classifier can suffer when it is trained in a specific domain (source domain) and applied in a different one (target domain). The principal reason for this degradation is the discrepancy in the distribution of the features that feed the classifier in the different domains. Therefore, we propose a domain adaptation method that maps the features from the different domains into a common subspace and learns a discriminative domain-invariant classifier within it. Our algorithm combines bilinear classifiers and multi-task learning for domain adaptation. The bilinear classifier encodes the feature transformation and classification parameters by a matrix decomposition. In this way, specific feature transformations for multiple domains and a shared classifier are jointly learned in a multi-task learning framework. Focusing on domain adaptation for visual object detection, we apply this method to the state-of-the-art deformable part-based model for cross-domain pedestrian detection. Experimental results show that our method significantly reduces the domain drift and improves accuracy when compared to several baselines.
Keywords: Domain Adaptation; Pedestrian Detection; ADAS
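The matrix-decomposition idea behind the bilinear classifier can be sketched in a few lines of numpy: each domain gets its own projection into a common subspace, while a single classifier is shared across domains. The random weights and variable names are hypothetical; in the paper the domain transforms and the shared classifier are learned jointly under a multi-task objective rather than sampled.

```python
import numpy as np

rng = np.random.default_rng(2)

d, k = 20, 5                                    # feature dim, shared subspace dim
# Hypothetical learned parameters (random here for illustration):
U_src = rng.normal(size=(k, d)) / np.sqrt(d)    # source-domain transform
U_tgt = rng.normal(size=(k, d)) / np.sqrt(d)    # target-domain transform
w = rng.normal(size=k)                          # classifier shared by all domains

def score(x, U):
    """Bilinear score: project x into the common subspace, then classify.

    Equivalent to a per-domain linear classifier whose weight vector is
    the rank-k factorization U.T @ w.
    """
    return float(w @ (U @ x))

x = rng.normal(size=d)
s_src, s_tgt = score(x, U_src), score(x, U_tgt)

# The decomposition view: the effective linear weights differ per domain,
# but both share the same k-dimensional classifier w.
w_src_effective = U_src.T @ w
```

Tying the domains together through the shared `w` is what lets labelled source data regularize the scarce target domain, which is the multi-task aspect the abstract refers to.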
|
|
|
David Geronimo, Frederic Lerasle and Antonio Lopez. 2012. State-driven particle filter for multi-person tracking. In J. Blanc-Talon et al., ed. 11th International Conference on Advanced Concepts for Intelligent Vision Systems. Heidelberg, Springer, 467–478.
Abstract: Multi-person tracking can be exploited in applications such as driver assistance, surveillance, multimedia and human-robot interaction. With the help of human detectors, particle filters offer a robust method able to filter noisy detections and provide temporal coherence. However, some traditional problems, such as occlusions with other targets or the scene, temporal drifting, or the detection of lost targets, are rarely considered, degrading system performance. Some authors propose to overcome these problems using heuristics that are not explained or formalized in the papers, for instance by defining exceptions to model updating depending on track overlaps. In this paper we propose to formalize these events by the use of a state graph, defining in an explicit way the current state of each track (e.g., potential, tracked, occluded or lost) and the transitions between states. This approach has the advantage of linking track states to track actions, such as the online updating of the underlying models, which gives flexibility to the system. It provides an explicit representation to adapt the multiple parallel trackers depending on the context, i.e., each track can make use of a specific filtering strategy, dynamic model, number of particles, etc., depending on its state. We implement this technique in a single-camera multi-person tracker and test it on public video sequences.
Keywords: human tracking
|
|