|
Javier Marin, David Vazquez, David Geronimo and Antonio Lopez. 2010. Learning Appearance in Virtual Scenarios for Pedestrian Detection. 23rd IEEE Conference on Computer Vision and Pattern Recognition, 137–144.
Abstract: Detecting pedestrians in images is a key functionality for avoiding vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? (Fig. 1). Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then learn appearance-based pedestrian classifiers from them using HOG features and a linear SVM. We test these classifiers on a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking. This dataset contains real-world images acquired from a moving car. The obtained result is compared with that of a classifier learnt from samples coming from real images. The comparison reveals that, although the virtual samples were not specially selected, virtual- and real-based training give rise to classifiers of similar performance.
Keywords: Pedestrian Detection; Domain Adaptation
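The training setup described in this entry (HOG features plus a linear SVM) can be sketched minimally as follows. This is an illustration, not the authors' code: the HOG here is simplified (per-cell orientation histograms, no block normalization), and a plain sub-gradient hinge-loss trainer stands in for a full SVM solver; all function names are our own.

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of unsigned gradient
    orientation, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

def train_linear_svm(X, y, lam=0.01, epochs=100, lr=0.1):
    """Sub-gradient descent on the soft-margin hinge loss; y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * w.dot(xi) < 1:       # margin violated: hinge sub-gradient
                w += lr * (yi * xi - lam * w)
            else:                        # only the regularizer contributes
                w -= lr * lam * w
    return w
```

Whether the training crops come from virtual or real scenarios, the descriptor and the learning step are identical, which is what makes the paper's comparison clean.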
|
|
|
Jose Manuel Alvarez, Theo Gevers and Antonio Lopez. 2013. Evaluating Color Representation for Online Road Detection. ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars, 594–595.
Abstract: Detecting traversable road areas ahead of a moving vehicle is a key process for modern autonomous driving systems. Most existing algorithms use color to classify pixels as road or background. These algorithms reduce the effect of lighting variations and weather conditions by exploiting the discriminant/invariant properties of different color representations. However, to date, no comparison between these representations has been conducted. Therefore, in this paper, we perform an evaluation of existing color representations for road detection. More specifically, we focus on color planes derived from RGB data and their most common combinations. The evaluation is done on a set of 7000 road images acquired using an on-board camera in different real-driving situations.
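The kind of evaluation this entry describes can be sketched as: derive several color planes from RGB pixels and score each plane by how well it separates road from background. The planes and the Fisher-ratio score below are common choices, not necessarily the paper's exact set; the names are ours.

```python
import colorsys
import numpy as np

def color_planes(rgb):
    """A few RGB-derived planes often tried for road detection.
    rgb: (N, 3) float array in [0, 1]. Returns {name: (N,) plane}."""
    r, g, b = rgb.T
    s = r + g + b + 1e-6
    hsv = np.array([colorsys.rgb_to_hsv(*p) for p in rgb])
    return {
        "R": r, "G": g, "B": b,
        "r_norm": r / s, "g_norm": g / s,   # illumination-normalized rg
        "H": hsv[:, 0], "S": hsv[:, 1],
        "O1": (r - g) / np.sqrt(2),         # opponent color axes
        "O2": (r + g - 2 * b) / np.sqrt(6),
    }

def fisher_ratio(plane, labels):
    """Two-class separability: between-class over within-class variance."""
    a, b = plane[labels == 1], plane[labels == 0]
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)
```

Ranking the planes by such a score on labelled road/background pixels is one straightforward way to compare representations as the paper does at scale.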
|
|
|
Idoia Ruiz and Joan Serrat. 2020. Rank-based ordinal classification. 25th International Conference on Pattern Recognition, 8069–8076.
Abstract: Unlike the regular classification task, in ordinal classification there is an order among the classes. As a consequence, not all classification errors matter equally: predicting a class close to the ground-truth one is better than predicting a class farther away. To account for this, most previous works employ loss functions based on the absolute difference between the predicted and ground-truth class labels. We argue that in many ordinal classification problems the label values are arbitrary (for instance 1…C, where C is the number of classes), so such loss functions may not be the best choice. We instead propose a network architecture that produces not a single class prediction but an ordered vector, or ranking, of all the possible classes from most to least likely, thanks to a loss function that compares ground-truth and predicted rankings of the class labels, not the labels themselves. Another advantage of this formulation is that we can enforce consistency in the predictions, namely, that predicted rankings come from a unimodal vector of scores with its mode at the ground-truth class. We compare with state-of-the-art ordinal classification methods, showing that ours attains equal or better performance, as measured by common ordinal classification metrics, on three benchmark datasets. Furthermore, it is also suitable for a new task in image aesthetics assessment, i.e., most-voted score prediction. Finally, we also apply it to building damage assessment from satellite images, providing an analysis of its performance depending on the degree of imbalance of the dataset.
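The core idea of comparing rankings rather than labels can be sketched as follows. The footrule-style distance below is a simple stand-in for the paper's loss (which is differentiable and drives a network); the tie-breaking convention is an assumption of ours.

```python
import numpy as np

def target_ranking(num_classes, gt):
    """Ideal ranking: classes ordered by distance to the ground-truth
    class, ties broken toward lower labels (one reasonable convention)."""
    return sorted(range(num_classes), key=lambda c: (abs(c - gt), c))

def ranking_loss(scores, gt):
    """Footrule-style distance between predicted and ideal rankings:
    sum of per-class rank displacements, 0 when they fully agree."""
    pred_rank = list(np.argsort(-np.asarray(scores)))  # most to least likely
    ideal = target_ranking(len(scores), gt)
    pos_pred = {c: i for i, c in enumerate(pred_rank)}
    pos_ideal = {c: i for i, c in enumerate(ideal)}
    return sum(abs(pos_pred[c] - pos_ideal[c]) for c in pos_pred)
```

Note that any score vector that is unimodal with its mode at the ground-truth class yields zero loss, which is exactly the consistency property the abstract mentions.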
|
|
|
Jiaolong Xu, Sebastian Ramos, David Vazquez and Antonio Lopez. 2014. Cost-sensitive Structured SVM for Multi-category Domain Adaptation. 22nd International Conference on Pattern Recognition. IEEE, 3886–3891.
Abstract: Domain adaptation addresses the drop in accuracy that a classifier may suffer when the training data (source domain) and the testing data (target domain) are drawn from different distributions. In this work, we focus on domain adaptation for the structured SVM (SSVM). We propose a cost-sensitive domain adaptation method for SSVM, namely COSS-SSVM. In particular, when re-training an adapted classifier on target and source data, the idea we explore consists of introducing a non-zero cost even for correctly classified source-domain samples. In this way, we aim to learn a more target-oriented classifier by not rewarding (with zero loss) properly classified source-domain training samples. We assess the effectiveness of COSS-SSVM on multi-category object recognition.
Keywords: Domain Adaptation; Pedestrian Detection
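The "non-zero cost for correct source samples" idea can be illustrated with a scalar binary hinge loss; the paper formulates it inside a structured SVM, so the function below is only a stand-in for the concept, with names and the `eps` parameter chosen by us.

```python
import numpy as np

def cost_sensitive_hinge(w, X, y, is_source, eps=0.2):
    """Hinge loss with a non-zero floor `eps` on source-domain samples:
    a correctly classified source sample is never rewarded with zero
    loss, biasing the solution toward the target domain."""
    margins = y * (X @ w)
    loss = np.maximum(0.0, 1.0 - margins)   # standard hinge loss
    floor = np.where(is_source, eps, 0.0)   # minimum cost for source samples
    return float(np.mean(np.maximum(loss, floor)))
```

With `eps=0`, this reduces to the ordinary hinge loss, so the cost-sensitivity is a strict generalization.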
|
|
|
Naveen Onkarappa and Angel Sappa. 2012. An Empirical Study on Optical Flow Accuracy Depending on Vehicle Speed. IEEE Intelligent Vehicles Symposium. IEEE Xplore, 1138–1143.
Abstract: Driver assistance and safety systems are receiving increasing attention as steps towards automatic navigation and safety. Optical flow, as a motion estimation technique, plays a major role in making these systems a reality. In this direction, the current paper demonstrates the suitability of the polar representation for optical flow estimation in such systems. Furthermore, the influence of individual regularization terms on the accuracy of optical flow is empirically evaluated on image sequences at different speeds. In addition, a new synthetic dataset of image sequences with different speeds is generated, along with the ground-truth optical flow.
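The polar representation the entry refers to is just the Cartesian flow vectors re-expressed as magnitude and orientation; under forward ego-motion the flow field is roughly radial from the focus of expansion, which makes this representation natural. A minimal sketch of the conversion (our own helper names):

```python
import numpy as np

def flow_to_polar(u, v):
    """Cartesian flow (u, v) -> polar (magnitude rho, orientation phi)."""
    rho = np.hypot(u, v)
    phi = np.arctan2(v, u)   # orientation in (-pi, pi]
    return rho, phi

def polar_to_flow(rho, phi):
    """Inverse conversion, for round-tripping between representations."""
    return rho * np.cos(phi), rho * np.sin(phi)
```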
|
|
|
Felipe Codevilla, Eder Santana, Antonio Lopez and Adrien Gaidon. 2019. Exploring the Limitations of Behavior Cloning for Autonomous Driving. 18th IEEE International Conference on Computer Vision, 9328–9337.
Abstract: Driving requires reacting to a wide variety of complex environment conditions and agent behaviors. Explicitly modeling each possible scenario is unrealistic. In contrast, imitation learning can, in theory, leverage data from large fleets of human-driven cars. Behavior cloning in particular has been successfully used to learn simple visuomotor policies end-to-end, but scaling to the full spectrum of driving behaviors remains an unsolved problem. In this paper, we propose a new benchmark to experimentally investigate the scalability and limitations of behavior cloning. We show that behavior cloning leads to state-of-the-art results, executing complex lateral and longitudinal maneuvers, even in unseen environments, without being explicitly programmed to do so. However, we confirm several limitations of the behavior cloning approach: some well-known ones (e.g., dataset bias and overfitting), new generalization issues (e.g., dynamic objects and the lack of causal modeling), and training instabilities, all requiring further research before behavior cloning can graduate to real-world driving. The code, dataset, benchmark, and agent studied in this paper can be found on GitHub.
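Behavior cloning, at its core, is supervised regression from observations to expert actions. The sketch below uses a linear least-squares policy purely to make the training principle concrete; the paper studies deep visuomotor policies, and every name here is illustrative.

```python
import numpy as np

def behavior_clone(observations, expert_actions):
    """Behavior cloning in its simplest form: fit a policy to expert
    state-action pairs by supervised regression. Here the policy is
    linear with a bias term, fit in closed form by least squares."""
    X = np.hstack([observations, np.ones((len(observations), 1))])
    W, *_ = np.linalg.lstsq(X, expert_actions, rcond=None)

    def policy(obs):
        return np.hstack([obs, 1.0]) @ W

    return policy
```

The limitations the abstract lists (dataset bias, missing causal structure) arise precisely because nothing in this training loop distinguishes correlation in the demonstrations from the expert's actual decision rule.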
|
|
|
Aura Hernandez-Sabate, Debora Gil, David Roche, Monica M. S. Matsumoto and Sergio S. Furuie. 2011. Inferring the Performance of Medical Imaging Algorithms. In Pedro Real, Daniel Diaz-Pernil, Helena Molina-Abril, Ainhoa Berciano and Walter Kropatsch, eds. 14th International Conference on Computer Analysis of Images and Patterns. Berlin, Springer-Verlag Berlin Heidelberg, 520–528. (LNCS.)
Abstract: Evaluating the performance and limitations of medical imaging algorithms is essential to estimate their social, economic and clinical impact. However, validation of medical imaging techniques is a challenging task due to the variety of imaging and clinical problems involved, as well as the difficulty of systematically extracting a reliable ground truth. Although specific validation protocols are reported in every medical imaging paper, two major concerns remain: the definition of standardized methodologies transversal to all problems, and the generalization of conclusions to the whole clinical data set.
We claim that both issues would be fully solved if we had a statistical model relating the ground truth and the output of computational imaging techniques. Such a statistical model could determine to what extent the algorithm behaves like the ground truth from the analysis of a sampling of the validation data set. We present a statistical inference framework that reports the agreement between two quantities and describes their relationship. We show its transversality by applying it to the validation of two different tasks: contour segmentation and landmark correspondence.
Keywords: Validation; Statistical Inference; Medical Imaging Algorithms
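The kind of ground-truth-versus-output relationship the framework builds on can be illustrated with a simple regression report; the paper's inference framework is more elaborate than this, so treat the snippet as a conceptual sketch with names of our choosing.

```python
import numpy as np

def agreement(gt, out):
    """Describe how an algorithm's output relates to ground truth on a
    validation sample: fit out ~ a*gt + b and report the fit together
    with the Pearson correlation. Perfect agreement is a=1, b=0, r=1."""
    a, b = np.polyfit(gt, out, 1)
    r = np.corrcoef(gt, out)[0, 1]
    return {"slope": a, "intercept": b, "r": r}
```

Deviations of the slope from 1 or the intercept from 0 then quantify systematic bias, while low correlation flags unexplained disagreement on the sampled cases.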
|
|
|
Naveen Onkarappa and Angel Sappa. 2013. Laplacian Derivative based Regularization for Optical Flow Estimation in Driving Scenario. 15th International Conference on Computer Analysis of Images and Patterns. Springer Berlin Heidelberg, 483–490. (LNCS.)
Abstract: Existing state-of-the-art optical flow approaches, which are evaluated on standard datasets such as Middlebury, do not necessarily perform similarly when evaluated on driving scenarios. This drop in performance is due to several challenges arising in real scenarios during driving. In this direction, this paper proposes a modification to the regularization term of a variational optical flow formulation that notably improves the results, especially in driving scenarios. The proposed modification consists of using the Laplacian derivatives of the flow components in the regularization term instead of their gradients. We show the improvement in results on a standard real-image-sequence dataset (KITTI).
Keywords: Optical Flow; Regularization; Driver Assistance Systems; Performance Evaluation
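The difference between the two regularizers can be shown numerically: a first-order (gradient) term penalizes any spatial variation of the flow, while the second-order (Laplacian) term of the paper's modification leaves smoothly ramping flow unpenalized, which matters in driving scenes where flow magnitude grows steadily toward the image borders. A discrete sketch (our own helper names; the paper works inside a variational formulation):

```python
import numpy as np

def gradient_energy(f):
    """Classic first-order regularizer: sum of squared flow gradients."""
    gy, gx = np.gradient(f)
    return float(np.sum(gx**2 + gy**2))

def laplacian_energy(f):
    """Second-order regularizer: penalize the Laplacian of each flow
    component instead of its gradient, so linearly varying flow is free."""
    gy, gx = np.gradient(f)
    lap = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
    return float(np.sum(lap**2))
```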
|
|
|
Arnau Ramisa, Shrihari Vasudevan, David Aldavert, Ricardo Toledo and Ramon Lopez de Mantaras. 2009. Evaluation of the SIFT Object Recognition Method in Mobile Robots. 12th International Conference of the Catalan Association for Artificial Intelligence, 9–18. (Frontiers in Artificial Intelligence and Applications.)
Abstract: General object recognition in mobile robots is of primary importance to enhance the representation of the environment that robots use for their reasoning processes. We contribute to closing this gap by evaluating the SIFT object recognition method on a challenging dataset, focusing on issues relevant to mobile robotics. The method proved resistant to the working conditions of mobile robotics, but mainly for well-textured objects.
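The matching stage at the heart of the SIFT object recognition method is nearest-neighbour descriptor matching with Lowe's ratio test, which we can sketch over any descriptor arrays (the function name and `ratio` default are ours; real SIFT descriptors are 128-dimensional):

```python
import numpy as np

def ratio_test_matches(desc_model, desc_scene, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test: accept a match
    only when the best scene descriptor is clearly closer than the
    second best, which suppresses ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_model):
        dists = np.linalg.norm(desc_scene - d, axis=1)
        j, k = np.argsort(dists)[:2]     # indices of best and second best
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches
```

Its dependence on distinctive descriptors is why, as the evaluation found, performance concentrates on well-textured objects.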
|
|
|
Guim Perarnau, Joost Van de Weijer, Bogdan Raducanu and Jose Manuel Alvarez. 2016. Invertible Conditional GANs for Image Editing. 30th Annual Conference on Neural Information Processing Systems Workshops.
Abstract: Generative Adversarial Networks (GANs) have recently been shown to successfully approximate complex data distributions. A relevant extension of this model is the conditional GAN (cGAN), where introducing external information makes it possible to determine specific representations of the generated images. In this work, we evaluate encoders that invert the mapping of a cGAN, i.e., that map a real image into a latent space and a conditional representation. This allows, for example, reconstructing and modifying real images of faces conditioned on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables re-generating real images with deterministic complex modifications.
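The IcGAN editing pipeline is: encode a real image into (latent, condition), modify the condition, then re-generate. The toy below replaces the learned deep networks with linear maps so the round trip is explicit and runnable; every symbol here (A, B, G, encode) is a schematic stand-in, not the paper's model.

```python
import numpy as np

# Toy stand-ins for the learned networks: generator G(z, y) and the
# encoders that invert it. In the real IcGAN these are deep networks.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))   # maps the latent code z
B = rng.standard_normal((8, 2))   # maps the condition y (e.g. attributes)

def G(z, y):
    """Generator: 'image' from latent code plus condition vector."""
    return A @ z + B @ y

def encode(x):
    """Joint stand-in for E_z and E_y: recover (z, y) from an image.
    For a linear G this is exactly a least-squares inversion."""
    M = np.hstack([A, B])
    zy, *_ = np.linalg.lstsq(M, x, rcond=None)
    return zy[:4], zy[4:]

# IcGAN editing: encode, flip an attribute, re-generate.
z0, y0 = rng.standard_normal(4), np.array([1.0, 0.0])
x = G(z0, y0)                          # "real" image
z, y = encode(x)                       # inferred latent + condition
x_edit = G(z, np.array([0.0, 1.0]))    # same identity, flipped attribute
```

The deterministic part of "deterministic complex modifications" is visible here: once (z, y) is recovered, every edit of y maps to exactly one output.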
|
|