Naveen Onkarappa, & Angel Sappa. (2012). An Empirical Study on Optical Flow Accuracy Depending on Vehicle Speed. In IEEE Intelligent Vehicles Symposium (pp. 1138–1143). IEEE Xplore.
Abstract: Driver assistance and safety systems are receiving increasing attention as steps towards automatic navigation and safety. Optical flow, as a motion estimation technique, plays a major role in making these systems a reality. Towards this end, the current paper demonstrates the suitability of polar representation for optical flow estimation in such systems. Furthermore, the influence of individual regularization terms on the accuracy of optical flow is empirically evaluated on image sequences of different speeds. Additionally, a new synthetic dataset of image sequences with different speeds is generated, together with the ground-truth optical flow.
|
Miguel Oliveira, Angel Sappa, & V. Santos. (2012). Color Correction for Onboard Multi-camera Systems using 3D Gaussian Mixture Models. In IEEE Intelligent Vehicles Symposium (pp. 299–303). IEEE Xplore.
Abstract: The current paper proposes a novel color correction approach for onboard multi-camera systems. It works by segmenting the given images into several regions. A probabilistic segmentation framework, using 3D Gaussian Mixture Models, is proposed. Regions are used to compute local color correction functions, which are then combined to obtain the final corrected image. An image dataset of road scenarios is used to establish a performance comparison of the proposed method with seven other well-known color correction algorithms. Results show that the proposed approach is the highest-scoring color correction method. Also, the proposed single-step 3D color space probabilistic segmentation reduces processing time over similar approaches.
|
German Ros, Jesus Martinez del Rincon, & Gines Garcia-Mateos. (2012). Articulated Particle Filter for Hand Tracking. In 21st International Conference on Pattern Recognition (pp. 3581–3585).
Abstract: This paper proposes a new version of the Particle Filter, called Articulated Particle Filter (ArPF), which has been specifically designed for efficient sampling of the hierarchical spaces generated by articulated objects. Our approach decomposes the articulated motion into layers for efficiency, making use of careful modeling of the diffusion noise along with its propagation through the articulations. This increases accuracy and prevents divergence. The algorithm is tested on hand tracking because of its complex hierarchical articulated nature. For this purpose, a new dataset generation tool for quantitative evaluation is also presented in this paper.
|
Jose Carlos Rubio, Joan Serrat, & Antonio Lopez. (2012). Unsupervised co-segmentation through region matching. In 25th IEEE Conference on Computer Vision and Pattern Recognition (pp. 749–756). IEEE Xplore.
Abstract: Co-segmentation is defined as jointly partitioning multiple images depicting the same or similar object into foreground and background. Our method consists of a multiple-scale multiple-image generative model, which jointly estimates the foreground and background appearance distributions from several images in an unsupervised manner. In contrast to other co-segmentation methods, our approach does not require the images to have similar foregrounds and different backgrounds to function properly. Region matching is applied to exploit inter-image information by establishing correspondences between the common objects that appear in the scene. Moreover, computing many-to-many associations of regions allows further applications, such as recognition of object parts across images. We report results on iCoseg, a challenging dataset that presents extreme variability in camera viewpoint, illumination and object deformations and poses. We also show that our method is robust against large intra-class variability in the MSRC database.
|
Jose Carlos Rubio, Joan Serrat, & Antonio Lopez. (2012). Multiple target tracking and identity linking under split, merge and occlusion of targets and observations. In 1st International Conference on Pattern Recognition Applications and Methods.
|
Ferran Diego, G.D. Evangelidis, & Joan Serrat. (2012). Night-time outdoor surveillance by mobile cameras. In 1st International Conference on Pattern Recognition Applications and Methods (Vol. 2, pp. 365–371).
Abstract: This paper addresses the problem of video surveillance by mobile cameras. We present a method that allows online change detection in night-time outdoor surveillance. Because of the camera movement, background frames are not available and must be “localized” in former sequences and registered with the current frames. To this end, we propose a Frame Localization And Registration (FLAR) approach that solves the problem efficiently. Frames of former sequences define a database which is queried by current frames in turn. To quickly retrieve nearest neighbors, the database is indexed through a visual dictionary method based on the SURF descriptor. Furthermore, frame localization benefits from a temporal filter that exploits the temporal coherence of videos. Next, the recently proposed ECC alignment scheme is used to spatially register the synchronized frames. Finally, change detection methods are applied to the aligned frames in order to mark suspicious areas. Experiments with real night sequences recorded by in-vehicle cameras demonstrate the performance of the proposed method and verify its efficiency and effectiveness against other methods.
|
Angel Sappa, David Geronimo, Fadi Dornaika, Mohammad Rouhani, & Antonio Lopez. (2012). Moving object detection from mobile platforms using stereo data registration. In Marek R. Ogiela, & Lakhmi C. Jain (Eds.), Computational Intelligence paradigms in advanced pattern classification (Vol. 386, pp. 25–37). Springer Berlin Heidelberg.
Abstract: This chapter describes a robust approach for detecting moving objects from on-board stereo vision systems. It relies on a feature point quaternion-based registration, which avoids common problems that appear when computationally expensive iterative-based algorithms are used on dynamic environments. The proposed approach consists of three main stages. Initially, feature points are extracted and tracked through consecutive 2D frames. Then, a RANSAC based approach is used for registering two point sets, with known correspondences in the 3D space. The computed 3D rigid displacement is used to map two consecutive 3D point clouds into the same coordinate system by means of the quaternion method. Finally, moving objects correspond to those areas with large 3D registration errors. Experimental results show the viability of the proposed approach to detect moving objects like vehicles or pedestrians in different urban scenarios.
Keywords: pedestrian detection
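The registration stage described in this abstract (rigid alignment of two corresponded 3D point sets via the quaternion method, then flagging large-residual points as moving) can be sketched with Horn's closed-form quaternion solution. The function names and the residual threshold below are illustrative assumptions, not the authors' code, and the RANSAC outer loop is omitted:

```python
import numpy as np

def quaternion_register(src, dst):
    """Rigid registration of corresponding 3D point sets via Horn's
    closed-form quaternion method: returns (R, t) with dst ~ src @ R.T + t."""
    p, q = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - p, dst - q
    S = A.T @ B                                # 3x3 cross-covariance matrix
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    # Horn's symmetric 4x4 matrix; its top eigenvector is the optimal quaternion
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,       Szx - Sxz,       Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz, Sxy + Syx,       Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       Syy - Sxx - Szz, Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,       Syz + Szy,       Szz - Sxx - Syy]])
    w, v = np.linalg.eigh(N)
    qw, qx, qy, qz = v[:, np.argmax(w)]        # unit quaternion (w, x, y, z)
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qw*qz),     2*(qx*qz + qw*qy)],
        [2*(qx*qy + qw*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qw*qx)],
        [2*(qx*qz - qw*qy),     2*(qy*qz + qw*qx),     1 - 2*(qx*qx + qy*qy)]])
    t = q - R @ p
    return R, t

def moving_mask(src, dst, R, t, thresh=0.1):
    """Points whose 3D registration residual exceeds `thresh` are
    candidates for belonging to moving objects (threshold is illustrative)."""
    residual = np.linalg.norm(dst - (src @ R.T + t), axis=1)
    return residual > thresh
```

In the chapter's pipeline this would run inside RANSAC on tracked feature points, with the recovered (R, t) used to map consecutive point clouds into a common frame.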
|
Angel Sappa, & George A. Triantafyllidis. (2012). Computer Graphics and Imaging.
|
Karel Paleček, David Geronimo, & Frederic Lerasle. (2012). Pre-attention cues for person detection. In Cognitive Behavioural Systems, COST 2102 International Training School (pp. 225–235). LNCS. Springer Berlin Heidelberg.
Abstract: Current state-of-the-art person detectors have been proven reliable and achieve very good detection rates. However, their performance is often far from real time, which limits their use to low-resolution images only. In this paper, we deal with the candidate window generation problem for person detection, i.e. we want to reduce the computational complexity of a person detector by reducing the number of regions that have to be evaluated. We base our work on Alexe’s paper [1], which introduced several pre-attention cues for generic object detection. We evaluate these cues in the context of person detection and show that their performance degrades rapidly for scenes containing multiple objects of interest, such as pictures of urban environments. We extend this set with new cues that better suit our class-specific task. The cues are designed to be simple and efficient, so that they can be used in the pre-attention phase of a more complex sliding-window based person detector.
|
Arnau Ramisa, David Aldavert, Shrihari Vasudevan, Ricardo Toledo, & Ramon Lopez de Mantaras. (2012). Evaluation of Three Vision Based Object Perception Methods for a Mobile Robot. JIRC - Journal of Intelligent and Robotic Systems, 68(2), 185–208.
Abstract: This paper addresses visual object perception applied to mobile robotics. Being able to perceive household objects in unstructured environments is a key capability in order to make robots suitable to perform complex tasks in home environments. However, finding a solution for this task is daunting: it requires the ability to handle the variability in image formation in a moving camera with tight time constraints. The paper brings to attention some of the issues with applying three state-of-the-art object recognition and detection methods in a mobile robotics scenario, and proposes methods to deal with windowing/segmentation. Thus, this work aims at evaluating the state-of-the-art in object perception in an attempt to develop a lightweight solution for mobile robotics use/research in typical indoor settings.
|
Jose Carlos Rubio, Joan Serrat, & Antonio Lopez. (2012). Video Co-segmentation. In 11th Asian Conference on Computer Vision (Vol. 7725, pp. 13–24). LNCS. Springer Berlin Heidelberg.
Abstract: Segmentation of a single image is in general a highly underconstrained problem. A frequent approach to solve it is to somehow provide prior knowledge or constraints on what the objects of interest look like (in terms of their shape, size, color, location or structure). Image co-segmentation trades the need for such knowledge for something much easier to obtain, namely, additional images showing the object from other viewpoints. Now the segmentation problem is posed as one of differentiating the similar object regions in all the images from the more varying background. In this paper, for the first time, we extend this approach to video segmentation: given two or more video sequences showing the same object (or objects belonging to the same class) moving in a similar manner, we aim to outline its region in all the frames. In addition, the method works in an unsupervised manner, by learning to segment at testing time. We compare favorably with two state-of-the-art methods on video segmentation and report results on benchmark videos.
|
Cristhian Aguilera, Fernando Barrera, Felipe Lumbreras, Angel Sappa, & Ricardo Toledo. (2012). Multispectral Image Feature Points. SENS - Sensors, 12(9), 12661–12672.
Abstract: This paper presents a novel feature point descriptor for the multispectral image case: far-infrared and visible spectrum images. It allows matching interest points on images of the same scene but acquired in different spectral bands. Initially, points of interest are detected on both images through a SIFT-like based scale space representation. Then, these points are characterized using an Edge Oriented Histogram (EOH) descriptor. Finally, points of interest from multispectral images are matched by finding nearest couples using the information from the descriptor. The provided experimental results and comparisons with similar methods show both the validity of the proposed approach as well as the improvements it offers with respect to the current state-of-the-art.
Keywords: multispectral image descriptor; color and infrared images; feature point descriptor
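As an illustration of the describe-then-match idea in this abstract, the toy sketch below builds a gridded histogram of gradient orientations for a patch and matches descriptors by nearest neighbour. The grid size, bin count and function names are assumptions for illustration, not the paper's exact EOH formulation:

```python
import numpy as np

def eoh_descriptor(patch, bins=5, grid=4):
    """Illustrative edge-orientation-histogram descriptor: the patch is split
    into a grid x grid layout and each cell contributes a histogram of
    gradient orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # edge orientation in [0, pi)
    h, w = patch.shape
    desc = []
    for i in range(grid):
        for j in range(grid):
            cell = (slice(i * h // grid, (i + 1) * h // grid),
                    slice(j * w // grid, (j + 1) * w // grid))
            hist, _ = np.histogram(ang[cell], bins=bins, range=(0, np.pi),
                                   weights=mag[cell])
            desc.append(hist)
    desc = np.concatenate(desc)
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc

def match_nearest(descs_a, descs_b):
    """Match each descriptor in A to its Euclidean nearest neighbour in B,
    mirroring the final matching stage of the pipeline."""
    d = np.linalg.norm(descs_a[:, None, :] - descs_b[None, :, :], axis=2)
    return d.argmin(axis=1)
```

The appeal of orientation histograms for the multispectral case is that edge directions tend to survive across spectral bands even when intensities do not.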
|
Fernando Barrera, Felipe Lumbreras, & Angel Sappa. (2012). Multimodal Stereo Vision System: 3D Data Extraction and Algorithm Evaluation. J-STSP - IEEE Journal of Selected Topics in Signal Processing, 6(5), 437–446.
Abstract: This paper proposes an imaging system for computing sparse depth maps from multispectral images. A special stereo head consisting of an infrared and a color camera defines the proposed multimodal acquisition system. The cameras are rigidly attached so that their image planes are parallel. Details about the calibration and image rectification procedure are provided. Sparse disparity maps are obtained by the combined use of mutual information enriched with gradient information. The proposed approach is evaluated using a Receiver Operating Characteristics curve. Furthermore, a multispectral dataset, color and infrared images, together with their corresponding ground truth disparity maps, is generated and used as a test bed. Experimental results in real outdoor scenarios are provided showing its viability and that the proposed approach is not restricted to a specific domain.
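The mutual-information part of the matching cost mentioned above can be sketched as follows. The bin count and function name are illustrative, and the paper additionally enriches this cost with gradient information, which is omitted here:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information between two image windows (e.g. an infrared and a
    colour patch at a candidate disparity): MI = H(a) + H(b) - H(a, b),
    estimated from a joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] /
                                         (px[:, None] * py[None, :])[nz])))
```

MI is attractive for multimodal stereo because it rewards any statistical dependence between the two windows, without assuming that corresponding pixels have similar intensities across spectral bands.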
|
Cristhian Aguilera, M. Ramos, & Angel Sappa. (2012). Simulated Annealing: A Novel Application of Image Processing in the Wood Area. In Marcos de Sales Guerra Tsuzuki (Ed.), Simulated Annealing – Advances, Applications and Hybridizations (pp. 91–104).
|
Monica Piñol, Angel Sappa, & Ricardo Toledo. (2012). MultiTable Reinforcement for Visual Object Recognition. In 4th International Conference on Signal and Image Processing (Vol. 221, pp. 469–480). LNCS. Springer India.
Abstract: This paper presents a bag-of-features based method for visual object recognition. Our contribution is focused on the selection of the best feature descriptor. It is implemented by using a novel multi-table reinforcement learning method that selects, among five classical descriptors (i.e., Spin, SIFT, SURF, C-SIFT and PHOW), the one that best describes each image. Experimental results and comparisons are provided showing the improvements achieved with the proposed approach.
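A stdlib-only toy of the per-image descriptor selection idea: one value table per image state (e.g. a cluster id), each table scoring the five candidate descriptors, with epsilon-greedy choice and a reward that would come from the downstream bag-of-features classifier. This is a generic learner sketched under those assumptions, not the paper's actual multi-table algorithm; all class and parameter names are illustrative:

```python
import random
from collections import defaultdict

DESCRIPTORS = ["Spin", "SIFT", "SURF", "C-SIFT", "PHOW"]  # from the paper

class MultiTableSelector:
    """One value table per image state; each table scores the candidate
    descriptors. Choice is epsilon-greedy; update nudges the chosen
    descriptor's value towards the observed reward."""
    def __init__(self, epsilon=0.1, alpha=0.2, seed=0):
        self.q = defaultdict(lambda: {d: 0.0 for d in DESCRIPTORS})
        self.epsilon, self.alpha = epsilon, alpha
        self.rng = random.Random(seed)

    def select(self, state):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(DESCRIPTORS)   # explore
        table = self.q[state]
        return max(table, key=table.get)          # exploit best-so-far

    def update(self, state, descriptor, reward):
        q = self.q[state][descriptor]
        self.q[state][descriptor] = q + self.alpha * (reward - q)
```

With a reward of 1 for a correct classification and 0 otherwise, each state's table converges towards the descriptor that most often describes its images well.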
|