|
Antonio Lopez, & Joan Serrat. (1998). Ridges and Valleys in Image Analysis.
|
|
|
Antonio Lopez, Jiaolong Xu, Jose Luis Gomez, David Vazquez, & German Ros. (2017). From Virtual to Real World Visual Perception using Domain Adaptation -- The DPM as Example. In Gabriela Csurka (Ed.), Domain Adaptation in Computer Vision Applications (pp. 243–258). Springer.
Abstract: Supervised learning tends to produce more accurate classifiers than unsupervised learning in general. This implies that annotated training data is preferred. When addressing visual perception challenges, such as localizing certain object classes within an image, the learning of the involved classifiers turns out to be a practical bottleneck. The reason is that, at a minimum, we have to frame object examples with bounding boxes in thousands of images. A priori, the more complex the model is regarding its number of parameters, the more annotated examples are required. This annotation task is performed by human oracles, which results in inaccuracies and errors in the annotations (aka ground truth), since the task is inherently very cumbersome and sometimes ambiguous. As an alternative, we have pioneered the use of virtual worlds for collecting such annotations automatically and with high precision. However, since the models learned with virtual data must operate in the real world, we still need to perform domain adaptation (DA). In this chapter we revisit the DA of a deformable part-based model (DPM) as an exemplifying case of virtual-to-real-world DA. As a use case, we address the challenge of vehicle detection for driver assistance, using different publicly available virtual-world data. While doing so, we investigate questions such as how the domain gap between virtual and real data behaves with respect to the dominant object appearance per domain, as well as the role of photo-realism in the virtual world.
Keywords: Domain Adaptation
|
|
|
Antonio Lopez, J. Hilgenstock, A. Busse, Ramon Baldrich, Felipe Lumbreras, & Joan Serrat. (2008). Nighttime Vehicle Detection for Intelligent Headlight Control. In Advanced Concepts for Intelligent Vision Systems, 10th International Conference, Proceedings (Vol. 5259, pp. 113–124). LNCS.
Keywords: Intelligent Headlights; vehicle detection
|
|
|
Antonio Lopez, J. Hilgenstock, A. Busse, Ramon Baldrich, Felipe Lumbreras, & Joan Serrat. (2008). Temporal Coherence Analysis for Intelligent Headlight Control.
Keywords: Intelligent Headlights
|
|
|
Antonio Lopez, Gabriel Villalonga, Laura Sellart, German Ros, David Vazquez, Jiaolong Xu, et al. (2017). Training my car to see using virtual worlds. Image and Vision Computing, 38, 102–118.
Abstract: Computer vision technologies are at the core of different advanced driver assistance systems (ADAS) and will play a key role in oncoming autonomous vehicles too. One of the main challenges for such technologies is to perceive the driving environment, i.e. to detect and track relevant driving information in a reliable manner (e.g. pedestrians in the vehicle route, free space to drive through). Nowadays it is clear that machine learning techniques are essential for developing such a visual perception for driving. In particular, the standard working pipeline consists of collecting data (i.e. on-board images), manually annotating the data (e.g. drawing bounding boxes around pedestrians), learning a discriminative data representation taking advantage of such annotations (e.g. a deformable part-based model, a deep convolutional neural network), and then assessing the reliability of such representation with the acquired data. In the last two decades most of the research efforts focused on representation learning (first, designing descriptors and learning classifiers; later doing it end-to-end). Hence, collecting data and, especially, annotating it, is essential for learning good representations. While this has been the case from the very beginning, only after the disruptive appearance of deep convolutional neural networks did it become a serious issue, due to their data-hungry nature. In this context, the problem is that manual data annotation is tiresome work prone to errors. Accordingly, in the late 2000s we initiated a research line consisting of training visual models using photo-realistic computer graphics, especially focusing on assisted and autonomous driving. In this paper, we summarize such work and show how it has become a new tendency with increasing acceptance.
|
|
|
Antonio Lopez, Felipe Lumbreras, Joan Serrat, & Juan J. Villanueva. (1999). Evaluation of Methods for Ridge and Valley Detection.
|
|
|
Antonio Lopez, Felipe Lumbreras, & Joan Serrat. (1998). Creaseness from level set extrinsic curvature.
|
|
|
Antonio Lopez, Felipe Lumbreras, & Joan Serrat. (1997). Efficient computation of local creaseness. CVC, Bellaterra (Spain).
|
|
|
Antonio Lopez, Felipe Lumbreras, A. Martinez, Joan Serrat, Xavier Roca, X. Varona, et al. (1997). Aplicaciones de la visión por computador a la industria [Applications of computer vision to industry].
|
|
|
Antonio Lopez, Ernest Valveny, & Juan J. Villanueva. (2005). Real-time quality control of surgical material packaging by artificial vision. Assembly Automation, 25(3).
|
|
|
Antonio Lopez, David Vazquez, & Gabriel Villalonga. (2018). Data for Training Models, Domain Adaptation. In Intelligent Vehicles: Enabling Technologies and Future Developments (pp. 395–436).
Abstract: Simulation can enable several developments in the field of intelligent vehicles. This chapter is divided into three main subsections. The first one deals with driving simulators. The continuous improvement of hardware performance is a well-known fact that is allowing the development of more complex driving simulators. The immersion in the simulation scene is increased by high fidelity feedback to the driver. In the second subsection, traffic simulation is explained as well as how it can be used for intelligent transport systems. Finally, it is rather clear that sensor-based perception and action must be based on data-driven algorithms. Simulation could provide data to train and test algorithms that are afterwards implemented in vehicles. These tools are explained in the third subsection.
Keywords: Driving simulator; hardware; software; interface; traffic simulation; macroscopic simulation; microscopic simulation; virtual data; training data
|
|
|
Antonio Lopez, David Lloret, Joan Serrat, & Juan J. Villanueva. (2000). Multilocal Creaseness Based on the Level-Set Extrinsic Curvature.
|
|
|
Antonio Lopez, David Lloret, & Joan Serrat. (1998). Creaseness measures for CT and MR image registration.
Abstract: Creases are a type of ridge/valley structure that can be characterized by local conditions. Therefore, creaseness refers to local ridgeness and valleyness. The curvature κ of the level curves and the mean curvature κM of the level surfaces are good measures of creaseness for 2-d and 3-d images, respectively. However, the way they are computed gives rise to discontinuities, reducing their usefulness in many applications. We propose a new creaseness measure, based on these curvatures, that avoids the discontinuities. We demonstrate its usefulness in the registration of CT and MR brain volumes, from the same patient, by searching for the maximum in the correlation of their creaseness responses (ridgeness from the CT and valleyness from the MR). Due to the high dimensionality of the space of transforms, the search is performed by a hierarchical approach combined with an optimization method at each level of the hierarchy.
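The 2-d creaseness measure named in the abstract, the curvature κ of the level curves, has a standard closed form in terms of image derivatives: κ = (Lxx·Ly² − 2·Lx·Ly·Lxy + Lyy·Lx²) / (Lx² + Ly²)^(3/2). The following is a minimal numpy sketch of that formula using finite differences; the function name and the eps regularizer are illustrative choices, not part of the cited paper.

```python
import numpy as np

def level_curve_curvature(L, eps=1e-8):
    """Curvature kappa of the level curves of a 2-d image L,
    a local creaseness (ridgeness/valleyness) measure.
    Sketch only: central finite differences, eps avoids division
    by zero where the gradient vanishes."""
    L = L.astype(float)
    Ly, Lx = np.gradient(L)        # axis 0 = rows (y), axis 1 = cols (x)
    Lyy, _ = np.gradient(Ly)       # second derivatives
    Lxy, Lxx = np.gradient(Lx)
    num = Lxx * Ly**2 - 2.0 * Lx * Ly * Lxy + Lyy * Lx**2
    den = (Lx**2 + Ly**2) ** 1.5 + eps
    return num / den
```

On an image whose level curves are circles of radius r (e.g. L = −(x² + y²)), the measure evaluates to −1/r away from the center, matching the geometric curvature of the level curves.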
|
|
|
Antonio Lopez, Cristina Cañero, Joan Serrat, J. Saludes, Felipe Lumbreras, & T. Graf. (2005). Detection of lane markings based on ridgeness and RANSAC.
|
|
|
Antonio Lopez, Atsushi Imiya, Tomas Pajdla, & Jose Manuel Alvarez. (2017). Computer Vision in Vehicle Technology: Land, Sea & Air. John Wiley & Sons, Ltd.
Abstract: This chapter examines different vision-based commercial solutions for real-life problems related to vehicles. It is worth mentioning the recent astonishing performance of deep convolutional neural networks (DCNNs) in difficult visual tasks such as image classification, object recognition/localization/detection, and semantic segmentation. In fact, different DCNN architectures are already being explored for low-level tasks such as optical flow and disparity computation, and higher-level ones such as place recognition.
|
|