|
Angel Sappa, Rosa Herrero, Fadi Dornaika, David Geronimo and Antonio Lopez. 2007. Road Approximation in Euclidean and v-Disparity Space: A Comparative Study. EUROCAST 2007, Workshop on Cybercars and Intelligent Vehicles, 368–369.
Abstract: This paper presents a comparative study between two road approximation techniques—planar surfaces—from stereo vision data. The first approach is carried out in the v-disparity space and is based on a voting scheme, the Hough transform. The second one consists of computing the best fitting plane for the whole set of 3D road data points, directly in the Euclidean space, by using least squares fitting. The comparative study is initially performed over a set of different synthetic surfaces (e.g., plane, quadratic surface, cubic surface) digitized by a virtual stereo head; then real data obtained with a commercial stereo head are used. The comparative study is intended to serve as a criterion for finding the best technique according to the road geometry. Additionally, it highlights common problems derived from a wrong assumption about the scene's prior knowledge.
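The Euclidean-space alternative described in the abstract, fitting a single plane to the 3D road points by least squares, can be sketched as follows. This is an illustrative reconstruction with NumPy, not the authors' code; the plane model z = ax + by + c and the synthetic coefficients are assumptions made for the example.

```python
import numpy as np

def fit_road_plane(points):
    """Least-squares fit of a plane z = a*x + b*y + c to an N x 3 point cloud."""
    # Design matrix [x, y, 1]; solve A @ (a, b, c) ~= z in the least-squares sense.
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

# Synthetic check: sample points from a known plane z = 0.1*x - 0.2*y + 1.5
rng = np.random.default_rng(0)
xy = rng.uniform(-10, 10, size=(500, 2))
z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 1.5
a, b, c = fit_road_plane(np.c_[xy, z])
```

On exactly planar data the fit recovers the generating coefficients; on quadratic or cubic surfaces like those in the study, the residual of such a fit grows with the departure from planarity, which is the kind of behaviour the comparison examines.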
|
|
|
Muhammad Anwer Rao, David Vazquez and Antonio Lopez. 2011. Opponent Colors for Human Detection. In J. Vitria, J.M. Sanches and M. Hernandez, eds. 5th Iberian Conference on Pattern Recognition and Image Analysis. Berlin Heidelberg, Springer, 363–370. (LNCS.)
Abstract: Human detection is a key component in fields such as advanced driving assistance and video surveillance. However, even detecting non-occluded standing humans remains a challenge of intensive research. Finding good features to build human models for further detection is probably one of the most important issues to face. Currently, shape, texture and motion features have received extensive attention in the literature. However, color-based features, which are important in other domains (e.g., image categorization), have received much less attention. In fact, the RGB color space has become the default choice. The focus has been on developing first and second order features on top of RGB space (e.g., HOG and co-occurrence matrices, resp.). In this paper we evaluate the opponent colors (OPP) space as a biologically inspired alternative for human detection. In particular, by feeding OPP space into the baseline framework of Dalal et al. for human detection (based on RGB, HOG and linear SVM), we obtain better detection performance than by using RGB space. This is a relevant result since, to the best of our knowledge, OPP space has not been previously used for human detection. This suggests that in the future it could be worthwhile to also compute co-occurrence matrices, self-similarity features, etc., on top of OPP space, i.e., as we have done with HOG in this paper.
Keywords: Pedestrian Detection; Color; Part Based Models
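The opponent color space evaluated here is commonly defined by a fixed linear mapping of the RGB channels. The sketch below uses one widespread convention (normalization constants vary across the literature, and the paper may use a different one); it is an illustration, not the authors' implementation.

```python
import numpy as np

def rgb_to_opponent(img):
    """Map an H x W x 3 float RGB image to opponent channels (O1, O2, O3).
    O1 and O2 carry red-green and yellow-blue opponency; O3 is intensity."""
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    O1 = (R - G) / np.sqrt(2)
    O2 = (R + G - 2 * B) / np.sqrt(6)
    O3 = (R + G + B) / np.sqrt(3)
    return np.stack([O1, O2, O3], axis=-1)

# A gray pixel produces zero response in both opponent channels.
gray = np.full((1, 1, 3), 0.5)
opp = rgb_to_opponent(gray)
```

A HOG pipeline would then compute gradients and orientation histograms per opponent channel exactly as it does per RGB channel, which is the substitution the paper evaluates.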
|
|
|
David Aldavert, Ricardo Toledo, Arnau Ramisa and Ramon Lopez de Mantaras. 2009. Efficient Object Pixel-Level Categorization using Bag of Features. In Advances in Visual Computing: 5th International Symposium on Visual Computing. Springer Berlin Heidelberg, 44–55.
Abstract: In this paper we present a pixel-level object categorization method suitable to be applied under real-time constraints. Since pixels are categorized using a bag of features scheme, the major bottleneck of such an approach would be the feature pooling in local histograms of visual words. Therefore, we propose to bypass this time-consuming step and directly obtain the score from a linear Support Vector Machine classifier. This is achieved by creating an integral image of the components of the SVM, which can readily obtain the classification score for any image sub-window with only 10 additions and 2 products, regardless of its size. In addition, we evaluated the performance of two efficient feature quantization methods: the Hierarchical K-Means and the Extremely Randomized Forest. All experiments have been done on the Graz02 database, showing results comparable or even superior to related work, with a lower computational cost.
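The constant-time sub-window scoring idea, building an integral image over per-pixel linear-SVM contributions, can be sketched as below. This is a simplified reconstruction: the toy visual-word map and weight vector are made up for illustration, and the paper's full method accounts for histogram-pooling details omitted here.

```python
import numpy as np

def integral_image(score_map):
    """Cumulative 2D sums, so any rectangle sum needs only 4 lookups."""
    return score_map.cumsum(axis=0).cumsum(axis=1)

def window_score(ii, top, left, bottom, right):
    """Sum of per-pixel contributions in rows [top, bottom) and
    cols [left, right), in constant time from the integral image ii."""
    total = ii[bottom - 1, right - 1]
    if top > 0:
        total -= ii[top - 1, right - 1]
    if left > 0:
        total -= ii[bottom - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

# Toy setup: each pixel gets a visual word; w maps words to SVM weights,
# so w[words] is the map of per-pixel classification contributions.
rng = np.random.default_rng(1)
words = rng.integers(0, 5, size=(40, 60))
w = rng.normal(size=5)
score_map = w[words]
ii = integral_image(score_map)
s = window_score(ii, 10, 20, 30, 50)  # score of a 20 x 30 sub-window
```

Because the lookup count does not depend on the window size, a sliding-window classifier can score windows at every scale at the same per-window cost, which is what makes the approach real-time.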
|
|
|
German Ros, Laura Sellart, Joanna Materzynska, David Vazquez and Antonio Lopez. 2016. The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes. 29th IEEE Conference on Computer Vision and Pattern Recognition, 3234–3243.
Abstract: Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. The irruption of deep convolutional neural networks (DCNNs) makes it possible to foresee reliable classifiers for this visual task. However, DCNNs must learn many parameters from raw images; thus, a sufficient amount of diversified images with class annotations is needed. These annotations are obtained through cumbersome human labour, which is especially challenging for semantic segmentation, since pixel-level annotations are required. In this paper, we propose to use a virtual world for automatically generating realistic synthetic images with pixel-level annotations. Then, we address the question of how useful such data can be for the task of semantic segmentation; in particular, when using a DCNN paradigm. In order to answer this question we have generated a synthetic, diversified collection of urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments on a DCNN setting that show how the inclusion of SYNTHIA in the training stage significantly improves the performance of the semantic segmentation task.
Keywords: Domain Adaptation; Autonomous Driving; Virtual Data; Semantic Segmentation
|
|
|
Cristhian A. Aguilera-Carrasco, F. Aguilera, Angel Sappa, C. Aguilera and Ricardo Toledo. 2016. Learning cross-spectral similarity measures with deep convolutional neural networks. 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops.
Abstract: The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene that cannot be obtained with images from a single spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains.
|
|
|
Craig Von Land, Ricardo Toledo and Juan J. Villanueva. 1996. Object Oriented Design of the DICOM standard. International Symposium on Cardiovascular Imaging.
|
|
|
Juan A. Carvajal Ayala, Dennis Romero and Angel Sappa. 2016. Fine-tuning based deep convolutional networks for lepidopterous genus recognition. 21st Ibero American Congress on Pattern Recognition, 467–475. (LNCS.)
Abstract: This paper describes an image classification approach oriented to identify specimens of lepidopterous insects at Ecuadorian ecological reserves. This work seeks to contribute to studies in the area of biology about genera of butterflies and also to facilitate the registration of unrecognized specimens. The proposed approach is based on the fine-tuning of three widely used pre-trained Convolutional Neural Networks (CNNs). This strategy is intended to overcome the reduced number of labeled images. Experimental results with a dataset labeled by expert biologists are presented, reaching a recognition accuracy above 92%.
|
|
|
Arnau Ramisa, David Aldavert, Shrihari Vasudevan, Ricardo Toledo and Ramon Lopez de Mantaras. 2011. The IIIA30 Mobile Robot Object Recognition Dataset. 11th Portuguese Robotics Open.
Abstract: Object perception is a key feature in order to make mobile robots able to perform high-level tasks. However, research aimed at addressing the constraints and limitations encountered in a mobile robotics scenario, like low image resolution, motion blur or tight computational constraints, is still very scarce. In order to facilitate future research in this direction, in this work we present an object detection and recognition dataset acquired using a mobile robotic platform. As a baseline for the dataset, we evaluated the cascade-of-weak-classifiers object detection method of Viola and Jones.
|
|
|
Patricia Marquez, Debora Gil, R. Mester and Aura Hernandez-Sabate. 2014. Local Analysis of Confidence Measures for Optical Flow Quality Evaluation. 9th International Conference on Computer Vision Theory and Applications, 450–457.
Abstract: In recent years, Optical Flow (OF) techniques have been developed to face the complexity of real sequences. Even using the most appropriate technique for a specific problem, at some points the output flow might fail to achieve the minimum error required by the system. Confidence measures computed from either input data or OF output should discard those points where OF is not accurate enough for its further use. It follows that evaluating the capabilities of a confidence measure for bounding OF error is as important as the definition itself. In this paper we analyze different confidence measures and point out their advantages and limitations for their use in real-world settings. We also explore how well current evaluation tools agree when assessing confidence-measure performance.
Keywords: Optical Flow; Confidence Measure; Performance Evaluation.
|
|
|
P. Ricaurte, C. Chilan, Cristhian A. Aguilera-Carrasco, Boris X. Vintimilla and Angel Sappa. 2014. Performance Evaluation of Feature Point Descriptors in the Infrared Domain. 9th International Conference on Computer Vision Theory and Applications, 545–550.
Abstract: This paper presents a comparative evaluation of classical feature point descriptors when they are used in the long-wave infrared spectral band. Robustness to changes in rotation, scaling, blur, and additive noise is evaluated using a state-of-the-art framework. Statistical results using an outdoor image data set are presented together with a discussion of the differences with respect to the results obtained when images from the visible spectrum are considered.
Keywords: Infrared Imaging; Feature Point Descriptors
|
|