Diego Cheda, Daniel Ponsa and Antonio Lopez. 2012. Monocular Depth-based Background Estimation. 7th International Conference on Computer Vision Theory and Applications. 323–328.
Abstract: In this paper, we address the problem of reconstructing the background of a scene from a video sequence with occluding objects. The images are taken by hand-held cameras. Our method composes the background by selecting the appropriate pixels from previously aligned input images. To do so, we minimize a cost function that penalizes deviations from the following assumptions: the background contains the objects whose distance to the camera is maximal, and background objects are stationary. Distance information is roughly obtained by a supervised learning approach that allows us to distinguish between close and distant image regions. Moving foreground objects are filtered out using stationarity and motion-boundary-constancy measurements. The cost function is minimized by a graph-cuts method. We demonstrate the applicability of our approach by recovering an occlusion-free background in a set of sequences.
Diego Cheda, Daniel Ponsa and Antonio Lopez. 2012. Pedestrian Candidates Generation using Monocular Cues. IEEE Intelligent Vehicles Symposium. IEEE Xplore, 7–12.
Abstract: Common techniques for pedestrian candidate generation (e.g., sliding-window approaches) are based on an exhaustive search over the image. This implies that the number of windows produced is huge, which translates into significant time consumption in the classification stage. In this paper, we propose a method that significantly reduces the number of windows to be considered by a classifier. Our method is a monocular one that exploits the geometric and depth information available in single images. Both representations of the world are fused to generate pedestrian candidates based on an underlying model that considers only objects standing vertically on the ground plane and having a certain height, according to their depth in the scene. We evaluate our algorithm on a challenging dataset and demonstrate its application to pedestrian detection, where a considerable reduction in the number of candidate windows is achieved.
Keywords: pedestrian detection
Fernando Barrera, Felipe Lumbreras, Cristhian Aguilera and Angel Sappa. 2012. Planar-Based Multispectral Stereo. 11th International Conference on Quantitative InfraRed Thermography.
Cristhian Aguilera, Fernando Barrera, Angel Sappa and Ricardo Toledo. 2012. A Novel SIFT-Like-Based Approach for FIR-VS Images Registration. 11th International Conference on Quantitative InfraRed Thermography.
Monica Piñol, Angel Sappa, Angeles Lopez and Ricardo Toledo. 2012. Feature Selection Based on Reinforcement Learning for Object Recognition. Adaptive Learning Agents Workshop. 33–39.
German Ros, Angel Sappa, Daniel Ponsa and Antonio Lopez. 2012. Visual SLAM for Driverless Cars: A Brief Survey. IEEE Workshop on Navigation, Perception, Accurate Positioning and Mapping for Intelligent Vehicles.
Naveen Onkarappa and Angel Sappa. 2012. An Empirical Study on Optical Flow Accuracy Depending on Vehicle Speed. IEEE Intelligent Vehicles Symposium. IEEE Xplore, 1138–1143.
Abstract: Driver assistance and safety systems are receiving increasing attention on the path towards automatic navigation and safety. Optical flow, as a motion estimation technique, plays a major role in making these systems a reality. Towards this, in the current paper, the suitability of a polar representation for optical flow estimation in such systems is demonstrated. Furthermore, the influence of individual regularization terms on the accuracy of optical flow on image sequences of different speeds is empirically evaluated. In addition, a new synthetic dataset of image sequences with different speeds is generated, along with the ground-truth optical flow.
Miguel Oliveira, Angel Sappa and V. Santos. 2012. Color Correction for Onboard Multi-camera Systems using 3D Gaussian Mixture Models. IEEE Intelligent Vehicles Symposium. IEEE Xplore, 299–303.
Abstract: The current paper proposes a novel color correction approach for onboard multi-camera systems. It works by segmenting the given images into several regions. A probabilistic segmentation framework, using 3D Gaussian Mixture Models, is proposed. The regions are used to compute local color correction functions, which are then combined to obtain the final corrected image. An image dataset of road scenarios is used to establish a performance comparison of the proposed method with seven other well-known color correction algorithms. Results show that the proposed approach is the highest-scoring color correction method. Moreover, the proposed single-step 3D color space probabilistic segmentation reduces processing time compared with similar approaches.
German Ros, Jesus Martinez del Rincon and Gines Garcia-Mateos. 2012. Articulated Particle Filter for Hand Tracking. 21st International Conference on Pattern Recognition. 3581–3585.
Abstract: This paper proposes a new version of the Particle Filter, called the Articulated Particle Filter (ArPF), which has been specifically designed for efficient sampling of the hierarchical spaces generated by articulated objects. For efficiency, our approach decomposes the articulated motion into layers, making use of careful modeling of the diffusion noise along with its propagation through the articulations. This increases accuracy and prevents divergence. The algorithm is tested on hand tracking due to its complex hierarchical articulated nature. For this purpose, a new dataset generation tool for quantitative evaluation is also presented in this paper.
Jose Carlos Rubio, Joan Serrat and Antonio Lopez. 2012. Unsupervised co-segmentation through region matching. 25th IEEE Conference on Computer Vision and Pattern Recognition. IEEE Xplore, 749–756.
Abstract: Co-segmentation is defined as jointly partitioning multiple images depicting the same or similar objects into foreground and background. Our method consists of a multiple-scale, multiple-image generative model that jointly estimates the foreground and background appearance distributions from several images in an unsupervised manner. In contrast to other co-segmentation methods, our approach does not require the images to have similar foregrounds and different backgrounds to function properly. Region matching is applied to exploit inter-image information by establishing correspondences between the common objects that appear in the scene. Moreover, computing many-to-many associations of regions allows further applications, such as the recognition of object parts across images. We report results on iCoseg, a challenging dataset that presents extreme variability in camera viewpoint, illumination, and object deformations and poses. We also show that our method is robust against large intra-class variability in the MSRC database.