Angel Sappa, David Geronimo, Fadi Dornaika and Antonio Lopez. 2006. Real Time Vehicle Pose Using On-Board Stereo Vision System. International Conference on Image Analysis and Recognition, 205–216.
Abstract: This paper presents a robust technique for real-time estimation of the camera’s position and orientation, referred to as its pose. A commercial stereo vision system is used. Unlike previous approaches, the technique can be used in both urban and highway scenarios. It consists of two stages. First, a compact 2D representation of the original 3D data points is computed. Then, a RANSAC-based least-squares approach is used to fit a plane to the road. At the same time, the camera’s position and orientation relative to the road are computed. The proposed technique is intended for use in driver-assistance schemes for applications such as obstacle or pedestrian detection. Experimental results on urban environments with different road geometries are presented.
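The RANSAC plus least-squares plane fit described above can be sketched as follows. This is a minimal illustration on synthetic points; the function names, thresholds, and toy scene are ours, not the authors’.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through >= 3 points; returns (n, d) with n.x + d = 0."""
    centroid = pts.mean(axis=0)
    # The singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    n = vt[-1]
    return n, -n.dot(centroid)

def ransac_plane(pts, iters=200, thresh=0.05, seed=0):
    """RANSAC: fit planes to random 3-point samples, keep the one with most inliers."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        n, d = fit_plane(pts[rng.choice(len(pts), 3, replace=False)])
        inliers = np.abs(pts @ n + d) < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return fit_plane(pts[best])  # final least-squares refit on all inliers

# Synthetic scene: a nearly flat road (z ~ 0) plus off-plane "obstacle" points.
rng = np.random.default_rng(1)
road = np.column_stack([rng.uniform(-5, 5, 300), rng.uniform(0, 20, 300),
                        rng.normal(0, 0.01, 300)])
obstacle = np.column_stack([rng.uniform(-1, 1, 30), rng.uniform(5, 6, 30),
                            rng.uniform(0.5, 1.5, 30)])
n, d = ransac_plane(np.vstack([road, obstacle]))
```

The recovered normal and offset then give the camera’s pitch/roll and height relative to the road plane.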
Angel Sappa, Fadi Dornaika, David Geronimo and Antonio Lopez. 2007. Efficient On-Board Stereo Vision Pose Estimation. Computer Aided Systems Theory – Selected Papers, 1183–1190. (LNCS.)
Abstract: This paper presents an efficient technique for real-time estimation of the pose of an on-board stereo vision system. The whole process is performed in Euclidean space and consists of two stages. First, a compact representation of the original 3D data points is computed. Then, a RANSAC-based least-squares approach is used to fit a plane to the 3D road points. Fast RANSAC fitting is obtained by selecting points according to a probability distribution function that takes into account the density of points at a given depth. Finally, the stereo camera’s position and orientation (its pose) are computed relative to the road plane. The proposed technique is intended for use in driver-assistance systems for applications such as obstacle or pedestrian detection. Real-time performance is achieved. Experimental results on several environments, and comparisons with a previous approach, are presented.
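The depth-aware point selection can be illustrated with a toy sketch. The binning scheme below is our own assumption; the paper’s actual probability distribution may differ. The idea is to draw RANSAC samples with probability inversely proportional to the local point density at each depth, so sparse far-away road points are not drowned out by dense near-field data.

```python
import numpy as np

def depth_weighted_probs(depths, n_bins=20):
    """One sampling weight per point: 1 / (count of points in the same depth bin)."""
    counts, edges = np.histogram(depths, bins=n_bins)
    bin_idx = np.clip(np.digitize(depths, edges) - 1, 0, n_bins - 1)
    w = 1.0 / counts[bin_idx]
    return w / w.sum()

rng = np.random.default_rng(0)
# Near points are 10x denser than far points in this toy cloud.
depths = np.concatenate([rng.uniform(2, 10, 1000), rng.uniform(10, 40, 100)])
p = depth_weighted_probs(depths)
sample = rng.choice(len(depths), size=3, replace=False, p=p)
far_mass = p[depths > 10].sum()  # probability mass on the sparse far points
```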
Angel Sappa, Rosa Herrero, Fadi Dornaika, David Geronimo and Antonio Lopez. 2007. Road Approximation in Euclidean and v-Disparity Space: A Comparative Study. Computer Aided Systems Theory, 1105–1112. (LNCS.)
Abstract: This paper presents a comparative study of two techniques for approximating the road as a planar surface from stereo vision data. The first approach operates in v-disparity space and is based on a voting scheme, the Hough transform. The second consists of computing the best-fitting plane for all 3D road data points directly in Euclidean space, using least-squares fitting. The comparative study is first performed over a set of different synthetic surfaces (e.g., plane, quadratic surface, cubic surface) digitized by a virtual stereo head; then real data obtained with a commercial stereo head are used. The study is intended to serve as a criterion for finding the best technique according to the road geometry. Additionally, it highlights common problems arising from wrong assumptions about prior knowledge of the scene.
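The v-disparity representation used by the first technique can be sketched as follows, on a toy flat-road disparity map of our own construction: a planar road maps to a straight line in v-disparity space, which a Hough transform can then detect.

```python
import numpy as np

def v_disparity(disp_map, max_disp=64):
    """Per-row disparity histogram: rows = image rows v, columns = disparity bins."""
    h, w = disp_map.shape
    out = np.zeros((h, max_disp), dtype=int)
    for v in range(h):
        d = disp_map[v]
        d = d[(d >= 0) & (d < max_disp)].astype(int)
        np.add.at(out[v], d, 1)
    return out

# Toy disparity image of a flat road: disparity grows linearly toward the bottom row.
h, w = 100, 120
rows = np.arange(h).reshape(-1, 1)
disp = np.broadcast_to(0.5 * rows, (h, w)).copy()
vd = v_disparity(disp)
# Each row's histogram peaks at that row's (integer) disparity: a straight line.
peaks = vd.argmax(axis=1)
```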
Patricia Marquez, Debora Gil and Aura Hernandez-Sabate. 2011. A Confidence Measure for Assessing Optical Flow Accuracy in the Absence of Ground Truth. IEEE International Conference on Computer Vision – Workshops. Barcelona (Spain), IEEE, 2042–2049.
Abstract: Optical flow is a valuable tool for motion analysis in autonomous navigation systems. Reliable application requires determining the accuracy of the computed optical flow. This is a major challenge given the absence of ground truth in real-world sequences. This paper introduces a measure of optical flow accuracy for Lucas-Kanade-based flows in terms of the numerical stability of the data term. We call this measure the optical flow condition number. A statistical analysis over ground-truth data shows a good correlation between the condition number and the optical flow error. Experiments on driving sequences illustrate its potential for autonomous navigation systems.
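The quantity involved can be sketched concretely. The Lucas-Kanade data term solves a 2x2 system built from windowed image gradients, and its numerical stability is the condition number of that structure tensor (the naming and window setup below are ours):

```python
import numpy as np

def lk_condition_number(Ix, Iy):
    """Condition number of the Lucas-Kanade structure tensor for one window
    of spatial gradients Ix, Iy."""
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return np.linalg.cond(G)

rng = np.random.default_rng(0)
# Textured window: gradients in all directions -> well-conditioned system.
textured = lk_condition_number(rng.normal(size=(7, 7)), rng.normal(size=(7, 7)))
# Straight edge: gradient only in x -> aperture problem, condition number blows up.
edge = lk_condition_number(np.ones((7, 7)), np.zeros((7, 7)))
```

A large condition number flags pixels where the computed flow should not be trusted, which is the confidence interpretation the paper builds on.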
Patricia Marquez, Debora Gil and Aura Hernandez-Sabate. 2013. Evaluation of the Capabilities of Confidence Measures for Assessing Optical Flow Quality. ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars, 624–631.
Abstract: Assessing Optical Flow (OF) quality is essential for its further use in reliable decision-support systems. The absence of ground truth in such settings leads to the computation of OF Confidence Measures (CM) obtained from either input or output data. A fair comparison of the capabilities of the different CMs for bounding OF error is required in order to choose the best OF-CM pair for discarding points where OF computation is not reliable. This paper presents a statistical probabilistic framework for assessing the quality of a given CM. Our quality measure is given in terms of the percentage of pixels whose OF error bound cannot be determined from CM values. We also provide statistical tools for computing the CM value that ensures a given accuracy of the flow field.
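A toy sketch of this kind of quality criterion, under our own simplified reading (not the paper’s exact statistical framework): a CM is useful at error bound `eps` if thresholding it keeps only pixels with OF error below `eps`, and its quality is the fraction of pixels that must be discarded to get there.

```python
import numpy as np

def discarded_fraction(cm, err, eps):
    """Smallest fraction of pixels to discard (worst-confidence first) so that
    all remaining pixels have OF error below eps."""
    order = np.argsort(cm)           # ascending: lowest confidence first
    err_sorted = err[order]
    # Keep the largest high-confidence suffix whose errors are all below eps.
    bad = np.where(err_sorted >= eps)[0]
    keep_from = bad[-1] + 1 if len(bad) else 0
    return keep_from / len(cm)

rng = np.random.default_rng(0)
err = rng.exponential(0.5, 10_000)
perfect_cm = -err                      # perfectly anti-correlated with error
useless_cm = rng.normal(size=10_000)   # unrelated to error
good_frac = discarded_fraction(perfect_cm, err, eps=1.0)
bad_frac = discarded_fraction(useless_cm, err, eps=1.0)
```

A perfect CM discards only the truly bad pixels, while an uninformative CM forces nearly everything to be discarded before the error bound holds.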
Jose Manuel Alvarez, Theo Gevers and Antonio Lopez. 2013. Evaluating Color Representation for Online Road Detection. ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars, 594–595.
Abstract: Detecting traversable road areas ahead of a moving vehicle is a key process for modern autonomous driving systems. Most existing algorithms use color to classify pixels as road or background. These algorithms reduce the effect of lighting variations and weather conditions by exploiting the discriminant/invariant properties of different color representations. However, to date, no comparison between these representations has been conducted. Therefore, in this paper, we perform an evaluation of existing color representations for road detection. More specifically, we focus on color planes derived from RGB data and their most common combinations. The evaluation is done on a set of 7000 road images acquired using an on-board camera in different real-driving situations.
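A few of the commonly used color planes derived from RGB can be sketched as follows (our own selection, for illustration; the paper evaluates a broader set). Normalized chromaticity and saturation are near-invariant to brightness changes, which is why they are natural candidates for road/background classification.

```python
import numpy as np

def color_planes(rgb):
    """rgb: float array (..., 3) in [0, 1]. Returns a dict of derived color planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = r + g + b + 1e-8
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    return {
        "r_norm": r / s, "g_norm": g / s,           # illumination-normalized chromaticity
        "saturation": np.where(mx > 0, (mx - mn) / (mx + 1e-8), 0.0),
        "value": mx,
        "gray": 0.299 * r + 0.587 * g + 0.114 * b,  # luma
    }

# A road pixel and a brighter version of it share (near) identical chromaticity.
dark = np.array([0.2, 0.2, 0.25])
bright = dark * 2.0
planes_dark, planes_bright = color_planes(dark), color_planes(bright)
```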
Patricia Marquez, Debora Gil, Aura Hernandez-Sabate and Daniel Kondermann. 2013. When Is A Confidence Measure Good Enough? 9th International Conference on Computer Vision Systems. Springer, 344–353. (LNCS.)
Abstract: Confidence estimation has recently become a hot topic in image processing and computer vision. Yet several definitions of the term “confidence” exist and are sometimes used interchangeably. This is a position paper, in which we aim to give an overview of existing definitions, thereby clarifying the meaning of the terms used and facilitating further research in this field. Based on these clarifications, we develop a theory for comparing confidence measures with respect to their quality.
Keywords: Optical flow, confidence measure, performance evaluation
German Ros, Jesus Martinez del Rincon and Gines Garcia-Mateos. 2012. Articulated Particle Filter for Hand Tracking. 21st International Conference on Pattern Recognition, 3581–3585.
Abstract: This paper proposes a new version of the particle filter, called the Articulated Particle Filter (ArPF), specifically designed for efficient sampling of the hierarchical spaces generated by articulated objects. For efficiency, our approach decomposes the articulated motion into layers, carefully modeling the diffusion noise and its propagation through the articulations. This increases accuracy and prevents divergence. The algorithm is tested on hand tracking because of the hand’s complex hierarchical articulated nature. To this end, a new dataset-generation tool for quantitative evaluation is also presented.
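The layered-sampling idea can be illustrated with a toy example. The 2-joint arm, the best-particle shortcut, and the noise levels below are ours; the full ArPF uses weighted resampling and a proper diffusion model per articulation. The point is that each joint is refined in its own layer, so the search effort grows additively rather than multiplicatively with the number of joints.

```python
import numpy as np

def observe(angles):
    """End-effector position of a toy 2-link planar arm with unit-length links."""
    a1, a2 = angles
    return np.array([np.cos(a1) + np.cos(a1 + a2),
                     np.sin(a1) + np.sin(a1 + a2)])

rng = np.random.default_rng(0)
true_angles = np.array([0.4, -0.2])
target = observe(true_angles)

n = 500
est = np.zeros(2)
for joint in range(2):                 # layer by layer, root joint first
    particles = np.tile(est, (n, 1))
    particles[:, joint] += rng.normal(0, 0.5, n)        # diffuse only this joint
    errs = np.linalg.norm([observe(p) - target for p in particles], axis=1)
    est = particles[np.argmin(errs)]   # keep the best particle (for brevity)
```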
Jose Carlos Rubio, Joan Serrat, Antonio Lopez and N. Paragios. 2012. Image Contextual Representation and Matching through Hierarchies and Higher Order Graphs. 21st International Conference on Pattern Recognition, 2664–2667.
Abstract: We present a region-matching algorithm which establishes correspondences between regions from two segmented images. An abstract graph-based representation encodes the image in a hierarchical graph, exploiting the scene properties at two levels. First, the similarity and spatial consistency of the image’s semantic objects are encoded in a graph of commute times. Second, the cluttered regions of the semantic objects are represented with a shape descriptor. Many-to-many matching of regions is especially challenging due to the instability of segmentation under slight image changes, and we handle it explicitly through higher-order potentials. We demonstrate the matching approach on images of world-famous buildings captured under different conditions, showing the robustness of our method to large variations in illumination and viewpoint.
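Commute times on a graph have a standard spectral formula, which can be sketched on a toy graph (not the paper’s region graph): C(i, j) = vol(G) * (L⁺ᵢᵢ + L⁺ⱼⱼ − 2 L⁺ᵢⱼ), where L⁺ is the Moore-Penrose pseudoinverse of the graph Laplacian L = D − W.

```python
import numpy as np

def commute_times(W):
    """All-pairs commute times for a weighted adjacency matrix W."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                 # graph Laplacian
    Lp = np.linalg.pinv(L)             # pseudoinverse (L is singular)
    vol = d.sum()                      # graph volume = sum of degrees
    diag = np.diag(Lp)
    return vol * (diag[:, None] + diag[None, :] - 2 * Lp)

# Path graph 0-1-2 with unit weights: nodes 0 and 2 are "further apart"
# in commute time (effective resistance 2) than nodes 0 and 1 (resistance 1).
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
C = commute_times(W)
```

Since commute time equals graph volume times effective resistance, C(0,1) = 4 and C(0,2) = 8 for this path graph.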
Miguel Oliveira, L. Seabra Lopes, G. Hyun Lim, S. Hamidreza Kasaei, Angel Sappa and A. Tom. 2015. Concurrent Learning of Visual Codebooks and Object Categories in Open-Ended Domains. International Conference on Intelligent Robots and Systems, 2488–2495.
Abstract: In open-ended domains, robots must continuously learn new object categories. When training sets are created offline, it is not possible to ensure that they are representative of the object categories and features the system will encounter when operating online. In the Bag of Words model, visual codebooks are constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns, in an incremental and online fashion, both the visual object category representations and the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models, which are updated using new object views. The approach bears similarities to the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over long periods of time. Results show that the proposed system, with concurrent learning of object categories and codebooks, is capable of learning more categories, requiring fewer examples, and with similar accuracy, compared to the classical Bag of Words approach using offline-constructed codebooks.
Keywords: Visual Learning; Computer Vision; Autonomous Agents
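The flavor of an incrementally updated codebook can be sketched as follows. This is our own simplification: spherical clusters maintained with running means, with a new word spawned for outlier features; the paper’s actual Gaussian Mixture Model update rules are richer.

```python
import numpy as np

class IncrementalCodebook:
    """Online codebook: assign each feature to the nearest word or spawn a new one."""
    def __init__(self, spawn_dist=3.0):
        self.means, self.counts = [], []
        self.spawn_dist = spawn_dist

    def update(self, x):
        """Process one feature vector; returns the index of its codebook word."""
        if self.means:
            d = [np.linalg.norm(x - m) for m in self.means]
            k = int(np.argmin(d))
            if d[k] < self.spawn_dist:
                self.counts[k] += 1
                self.means[k] += (x - self.means[k]) / self.counts[k]  # running mean
                return k
        self.means.append(x.astype(float).copy())  # outlier: spawn a new word
        self.counts.append(1)
        return len(self.means) - 1

rng = np.random.default_rng(0)
cb = IncrementalCodebook()
# Stream 8-D features from two well-separated clusters, one view at a time.
for x in rng.normal(0.0, 0.3, (200, 8)):
    cb.update(x)
for x in rng.normal(5.0, 0.3, (200, 8)):
    cb.update(x)
```

After the stream, the codebook has discovered one word per cluster without ever seeing the data offline, which is the concurrent-learning behavior the paper targets.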