|
Muhammad Anwer Rao, David Vazquez and Antonio Lopez. 2011. Color Contribution to Part-Based Person Detection in Different Types of Scenarios. In P. Real, D.D., H. Molina, A. Berciano, W. Kropatsch, eds. 14th International Conference on Computer Analysis of Images and Patterns. Springer Berlin Heidelberg, 463–470.
Abstract: Camera-based person detection is of paramount interest due to its potential applications. The task is difficult because of the great variety of backgrounds (scenarios, illumination) in which persons appear, as well as their intra-class variability (pose, clothing, occlusion). In fact, the person class is one of those included in the popular PASCAL visual object classes (VOC) challenge. A breakthrough for this challenge, regarding person detection, is due to Felzenszwalb et al. These authors proposed a part-based detector that relies on histograms of oriented gradients (HOG) and latent support vector machines (LatSVM) to learn a model of the whole human body and its constitutive parts, as well as their relative positions. Since the approach of Felzenszwalb et al. appeared, new variants have been proposed, usually giving rise to more complex models. In this paper, we focus on an issue that has not attracted sufficient interest up to now. In particular, we refer to the fact that HOG is usually computed from the RGB color space, but other possibilities exist and deserve the corresponding investigation. In this paper we challenge RGB space with the opponent color space (OPP), which is inspired by the human vision system. We compute the HOG on top of OPP, then train and test the part-based human classifier by Felzenszwalb et al. using the PASCAL VOC challenge protocols and person database. Our experiments demonstrate that OPP outperforms RGB. We also investigate possible differences among types of scenarios: indoor, urban and countryside. Interestingly, our experiments suggest that the benefits of OPP with respect to RGB mainly come from indoor and countryside scenarios, those in which the human visual system was shaped by evolution.
Keywords: Pedestrian Detection; Color
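The opponent color space mentioned in the abstract has a standard linear definition; as a hedged sketch (this is the common textbook transform, not necessarily the exact normalization used in the paper), HOG could be computed per channel after a conversion like:

```python
import numpy as np

def rgb_to_opponent(rgb):
    """Convert an RGB image (H x W x 3, float) to the standard
    opponent color space (O1, O2, O3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2.0)            # red-green channel
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)  # yellow-blue channel
    o3 = (r + g + b) / np.sqrt(3.0)        # intensity channel
    return np.stack([o1, o2, o3], axis=-1)
```

HOG would then be computed on the three opponent channels instead of on R, G and B.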
|
|
|
Naveen Onkarappa and Angel Sappa. 2011. Space Variant Representations for Mobile Platform Vision Applications. In P. Real, D.D., H. Molina, A. Berciano, W. Kropatsch, ed. 14th International Conference on Computer Analysis of Images and Patterns. Springer Berlin Heidelberg, 146–154.
Abstract: The log-polar space-variant representation, motivated by biological vision, has been widely studied in the literature. Its data reduction and invariance properties have made it useful in many vision applications. However, due to its nature, it fails to preserve features in the periphery. In the current work, as an attempt to overcome this problem, we propose a novel space-variant representation. It is evaluated and shown to be better than the log-polar representation at preserving peripheral information, which is crucial for on-board mobile vision applications. The evaluation is performed by comparing log-polar and the proposed representation once they are used for estimating dense optical flow.
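For context, the baseline log-polar representation the paper compares against can be sketched as a simple resampling (a minimal illustration; the grid sizes and nearest-neighbour sampling are illustrative choices, not the paper's implementation):

```python
import numpy as np

def log_polar_sample(img, n_rho=64, n_theta=128, r_min=1.0):
    """Sample a grayscale image (H x W) on a log-polar grid centred
    on the image centre."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # log-spaced radii and uniformly spaced angles
    rho = np.exp(np.linspace(np.log(r_min), np.log(r_max), n_rho))
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(rho, theta, indexing="ij")
    # nearest-neighbour lookup back into the Cartesian image
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]   # (n_rho, n_theta) log-polar image
```

The log spacing of the radii is what concentrates samples near the fovea and thins them out in the periphery, the effect the proposed representation aims to mitigate.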
|
|
|
Yainuvis Socarras, David Vazquez, Antonio Lopez, David Geronimo and Theo Gevers. 2012. Improving HOG with Image Segmentation: Application to Human Detection. In J. Blanc-Talon et al., ed. 11th International Conference on Advanced Concepts for Intelligent Vision Systems. Springer Berlin Heidelberg, 178–189. (LNCS.)
Abstract: In this paper we improve the histogram of oriented gradients (HOG), a core descriptor of state-of-the-art object detection, by using higher-level information coming from image segmentation. The idea is to re-weight the descriptor while computing it, without increasing its size. The benefits of the proposal are two-fold: (i) to improve the performance of the detector by enriching the descriptor information, and (ii) to take advantage of the information from image segmentation, which is in fact likely to be used in other stages of the detection system, such as candidate generation or refinement.
We test our technique on the INRIA person dataset, which was originally developed to test HOG, embedding it in a human detection system. The well-known mean-shift segmentation method (from smaller to larger super-pixels) and different methods to re-weight the original descriptor (constant, region-luminance, color- or texture-dependent) have been evaluated. We achieve a performance improvement of 4.47% in detection rate through the use of differences of color between contour pixel neighborhoods as the re-weighting function.
Keywords: Segmentation; Pedestrian Detection
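The re-weighting idea can be illustrated with a minimal sketch: boost gradient magnitudes where a precomputed segmentation changes label, before the usual HOG orientation binning (the boundary rule and the constant boost here are illustrative, not the exact re-weighting functions evaluated in the paper):

```python
import numpy as np

def reweighted_gradient(img, labels, boundary_weight=2.0):
    """Re-weight gradient magnitudes on segmentation boundaries.
    `img` is a grayscale image (H x W); `labels` (H x W, int) is a
    precomputed segmentation (e.g. mean-shift super-pixels)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)
    # pixels whose right/down neighbour carries a different segment label
    boundary = np.zeros_like(labels, dtype=bool)
    boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]
    weight = np.where(boundary, boundary_weight, 1.0)
    return mag * weight, ori   # feed into the usual HOG binning
```

Because only the magnitudes change, the descriptor keeps its original size, as the abstract stresses.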
|
|
|
Fernando Barrera, Felipe Lumbreras and Angel Sappa. 2012. Evaluation of Similarity Functions in Multimodal Stereo. 9th International Conference on Image Analysis and Recognition, Aveiro, Portugal. Springer Berlin Heidelberg, 320–329. (LNCS.)
Abstract: This paper presents an evaluation framework for multimodal stereo matching, which allows comparing the performance of four similarity functions. Additionally, it presents details of a multimodal stereo head that supplies thermal infrared and color images, as well as aspects of its calibration and rectification. The pipeline includes a novel method for disparity selection, which is suitable for evaluating the similarity functions. Finally, a benchmark for comparing different initializations of the proposed framework is presented. The similarity functions are based on mutual information, gradient orientation and scale-space representations. Their evaluation is performed using two metrics: (i) disparity error, and (ii) number of correct matches on planar regions. In addition to the proposed evaluation, the current paper also shows that sparse 3D representations can be recovered from such a multimodal stereo head.
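Of the similarity functions mentioned, mutual information has a compact histogram-based form; a minimal sketch (the bin count and patch handling are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def mutual_information(patch_a, patch_b, bins=16):
    """Mutual information between two equally sized patches (e.g. a
    thermal window and a color window), estimated from the joint
    histogram of their intensities."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of patch_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of patch_b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Mutual information is attractive for the thermal/color pairing because it rewards statistical dependence between the modalities rather than direct intensity agreement.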
|
|
|
Miguel Oliveira, Angel Sappa and V. Santos. 2012. Color Correction using 3D Gaussian Mixture Models. 9th International Conference on Image Analysis and Recognition. Springer Berlin Heidelberg, 97–106. (LNCS.)
Abstract: The current paper proposes a novel color correction approach based on a probabilistic segmentation framework using 3D Gaussian Mixture Models. Regions are used to compute local color correction functions, which are then combined to obtain the final corrected image. The proposed approach is evaluated using a recently published metric and two large datasets comprising seventy images, comparing our algorithm with eight well-known color correction algorithms. Results show that the proposed approach is the highest-scoring color correction method. Also, the proposed single-step 3D color-space probabilistic segmentation reduces processing time over similar approaches.
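As a rough illustration of the probabilistic segmentation step, soft assignments of pixels to an already-fitted 3D color GMM follow directly from the Gaussian densities (the mixture parameters are assumed given here; the paper's fitting and correction details differ):

```python
import numpy as np

def gmm_responsibilities(pixels, means, covs, weights):
    """Soft assignment of RGB pixels (N x 3) to the K components of a
    3D Gaussian mixture; means (K x 3), covs (K x 3 x 3), weights (K,)
    are assumed already estimated."""
    n, k = pixels.shape[0], means.shape[0]
    lik = np.empty((n, k))
    for j in range(k):
        d = pixels - means[j]
        inv = np.linalg.inv(covs[j])
        norm = np.sqrt((2 * np.pi) ** 3 * np.linalg.det(covs[j]))
        # weighted multivariate normal density for component j
        lik[:, j] = weights[j] * np.exp(-0.5 * np.sum(d @ inv * d, axis=1)) / norm
    return lik / lik.sum(axis=1, keepdims=True)
```

Each region's local correction function can then be blended using these responsibilities to produce the final corrected image.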
|
|
|
Jose Manuel Alvarez, Theo Gevers, Y. LeCun and Antonio Lopez. 2012. Road Scene Segmentation from a Single Image. 12th European Conference on Computer Vision. Springer Berlin Heidelberg, 376–389. (LNCS.)
Abstract: Road scene segmentation is important in computer vision for different applications such as autonomous driving and pedestrian detection. Recovering the 3D structure of road scenes provides relevant contextual information to improve their understanding.
In this paper, we use a convolutional neural network based algorithm to learn features from noisy labels to recover the 3D scene layout of a road image. The novelty of the algorithm lies in generating training labels by applying an algorithm trained on a general image dataset to classify on-board images. Further, we propose a novel texture descriptor based on a learned color plane fusion to obtain maximal uniformity in road areas. Finally, acquired (off-line) and current (on-line) information are combined to detect road areas in single images.
From quantitative and qualitative experiments, conducted on publicly available datasets, it is concluded that convolutional neural networks are suitable for learning the 3D scene layout from noisy labels and provide a relative improvement of 7% compared to the baseline. Furthermore, combining color planes provides a statistical description of road areas that exhibits maximal uniformity and provides a relative improvement of 8% compared to the baseline. Finally, the improvement is even bigger when acquired and current information from a single image are combined.
Keywords: road detection
|
|
|
Naveen Onkarappa and Angel Sappa. 2013. Laplacian Derivative based Regularization for Optical Flow Estimation in Driving Scenario. 15th International Conference on Computer Analysis of Images and Patterns. Springer Berlin Heidelberg, 483–490. (LNCS.)
Abstract: Existing state-of-the-art optical flow approaches, which are evaluated on standard datasets such as Middlebury, do not necessarily perform similarly when evaluated on driving scenarios. This drop in performance is due to several challenges arising in real scenarios during driving. In this direction, in this paper we propose a modification to the regularization term in a variational optical flow formulation that notably improves the results, especially in driving scenarios. The proposed modification consists of using the Laplacian derivatives of the flow components in the regularization term instead of the gradients of the flow components. We show the improvement in results on a standard real image sequence dataset (KITTI).
Keywords: Optical flow; regularization; Driver Assistance Systems; Performance Evaluation
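The proposed change can be sketched directly: penalize the discrete Laplacian of each flow component rather than its gradient (a minimal illustration of the regularizer only, not the paper's full variational solver):

```python
import numpy as np

def laplacian_penalty(u, v):
    """Regularization energy built from Laplacian derivatives of the
    flow components (u, v), using a plain 5-point discrete Laplacian
    with replicated borders."""
    def lap(f):
        p = np.pad(f, 1, mode="edge")
        return (p[:-2, 1:-1] + p[2:, 1:-1]
                + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * f)
    return float(np.sum(lap(u) ** 2 + lap(v) ** 2))
```

A useful property in driving scenes: in the image interior, affine flow fields have zero Laplacian and so incur no penalty, whereas a gradient-based regularizer would smooth them away.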
|
|
|
Karel Paleček, David Geronimo and Frederic Lerasle. 2012. Pre-attention Cues for Person Detection. Cognitive Behavioural Systems, COST 2102 International Training School. Springer Berlin Heidelberg, 225–235. (LNCS.)
Abstract: Current state-of-the-art person detectors have been proven reliable and achieve very good detection rates. However, their performance is often far from real time, which limits their use to low-resolution images only. In this paper, we deal with the candidate window generation problem for person detection, i.e. we want to reduce the computational complexity of a person detector by reducing the number of regions that have to be evaluated. We base our work on Alexe's paper [1], which introduced several pre-attention cues for generic object detection. We evaluate these cues in the context of person detection and show that their performance degrades rapidly for scenes containing multiple objects of interest, such as images from urban environments. We extend this set with new cues that better suit our class-specific task. The cues are designed to be simple and efficient, so that they can be used in the pre-attention phase of a more complex sliding-window based person detector.
|
|
|
Jose Carlos Rubio, Joan Serrat and Antonio Lopez. 2012. Video Co-segmentation. 11th Asian Conference on Computer Vision. Springer Berlin Heidelberg, 13–24. (LNCS.)
Abstract: Segmentation of a single image is in general a highly underconstrained problem. A frequent approach to solving it is to somehow provide prior knowledge or constraints on what the objects of interest look like (in terms of their shape, size, color, location or structure). Image co-segmentation trades the need for such knowledge for something much easier to obtain, namely, additional images showing the object from other viewpoints. The segmentation problem is then posed as one of differentiating the similar object regions in all the images from the more varying background. In this paper, for the first time, we extend this approach to video segmentation: given two or more video sequences showing the same object (or objects belonging to the same class) moving in a similar manner, we aim to outline its region in all the frames. In addition, the method works in an unsupervised manner, by learning to segment at testing time. We compare favorably with two state-of-the-art methods on video segmentation and report results on benchmark videos.
|
|
|
Mohammad Rouhani and Angel Sappa. 2012. Non-Rigid Shape Registration: A Single Linear Least Squares Framework. 12th European Conference on Computer Vision. Springer Berlin Heidelberg, 264–277. (LNCS.)
Abstract: This paper proposes a non-rigid registration formulation capturing both global and local deformations in a single framework. The formulation is based on a quadratic estimation of the registration distance together with a quadratic regularization term. Hence, the optimal transformation parameters are easily obtained by solving a linear system of equations, which guarantees fast convergence. Experimental results with challenging 2D and 3D shapes are presented to show the validity of the proposed framework. Furthermore, comparisons with the most relevant approaches are provided.
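The "quadratic distance plus quadratic regularizer reduces to one linear system" idea can be illustrated on a toy case, fitting a regularized affine map to point correspondences (the paper addresses general non-rigid deformations; this only shows the least-squares mechanics):

```python
import numpy as np

def fit_regularized_affine(src, dst, lam=1e-3):
    """Closed-form least-squares estimate of an affine map src -> dst
    (both N x 2) with a quadratic (Tikhonov) regularizer of weight
    `lam`; a toy instance of solving for all transformation
    parameters in a single linear system."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])   # (n, 3) homogeneous coords
    # normal equations: (A^T A + lam I) X = A^T dst, X is 3 x 2
    X = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ dst)
    return X  # apply with: np.hstack([pts, ones]) @ X
```

Because both the data term and the regularizer are quadratic in the parameters, the minimizer is exact in one solve, with no iterative optimization, which is the source of the fast convergence claimed in the abstract.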
|
|