Author: Jose Luis Gomez; Gabriel Villalonga; Antonio Lopez
Title: Co-Training for Deep Object Detection: Comparing Single-Modal and Multi-Modal Approaches
Type: Journal Article
Year: 2021
Publication: Sensors
Abbreviated Journal: SENS
Volume: 21
Issue: 9
Pages: 3185
Keywords: co-training; multi-modality; vision-based object detection; ADAS; self-driving
Abstract: Top-performing computer vision models are powered by convolutional neural networks (CNNs). Training an accurate CNN highly depends on both the raw sensor data and their associated ground truth (GT). Collecting such GT is usually done through human labeling, which is time-consuming and does not scale as we wish. This data-labeling bottleneck may be intensified due to domain shifts among image sensors, which could force per-sensor data labeling. In this paper, we focus on the use of co-training, a semi-supervised learning (SSL) method, for obtaining self-labeled object bounding boxes (BBs), i.e., the GT to train deep object detectors. In particular, we assess the goodness of multi-modal co-training by relying on two different views of an image, namely, appearance (RGB) and estimated depth (D). Moreover, we compare appearance-based single-modal co-training with multi-modal co-training. Our results suggest that in a standard SSL setting (no domain shift, a few human-labeled data) and under virtual-to-real domain shift (many virtual-world labeled data, no human-labeled data), multi-modal co-training outperforms single-modal co-training. In the latter case, by performing GAN-based domain translation, both co-training modalities are on par, at least when using an off-the-shelf depth estimation model not specifically trained on the translated images.
Notes: ADAS; 600.118
Approved: no
Call Number: Admin @ si @ GVL2021
Serial: 3562

Author: Hannes Mueller; Andre Groeger; Jonathan Hersh; Andrea Matranga; Joan Serrat
Title: Monitoring war destruction from space using machine learning
Type: Journal Article
Year: 2021
Publication: Proceedings of the National Academy of Sciences of the United States of America
Abbreviated Journal: PNAS
Volume: 118
Issue: 23
Pages: e2025400118
Abstract: Existing data on building destruction in conflict zones rely on eyewitness reports or manual detection, which makes such data generally scarce, incomplete, and potentially biased. This lack of reliable data imposes severe limitations for media reporting, humanitarian relief efforts, human-rights monitoring, reconstruction initiatives, and academic studies of violent conflict. This article introduces an automated method of measuring destruction in high-resolution satellite images using deep-learning techniques combined with label augmentation and spatial and temporal smoothing, which exploit the underlying spatial and temporal structure of destruction. As a proof of concept, we apply this method to the Syrian civil war and reconstruct the evolution of damage in major cities across the country. Our approach allows generating destruction data with unprecedented scope, resolution, and frequency, and makes use of the ever-higher frequency at which satellite imagery becomes available.
Notes: ADAS; 600.118
Approved: no
Call Number: Admin @ si @ MGH2021
Serial: 3584

Author: Cristhian A. Aguilera-Carrasco; Angel Sappa; Cristhian Aguilera; Ricardo Toledo
Title: Cross-Spectral Local Descriptors via Quadruplet Network
Type: Journal Article
Year: 2017
Publication: Sensors
Abbreviated Journal: SENS
Volume: 17
Issue: 4
Pages: 873
Abstract: This paper presents a novel CNN-based architecture, referred to as Q-Net, to learn local feature descriptors that are useful for matching image patches from two different spectral bands. Given correctly matched and non-matching cross-spectral image pairs, a quadruplet network is trained to map input image patches to a common Euclidean space, regardless of the input spectral band. Our approach is inspired by the recent success of triplet networks in the visible spectrum, but adapted for cross-spectral scenarios, where, for each matching pair, there are always two possible non-matching patches: one for each spectrum. Experimental evaluations on a public cross-spectral VIS-NIR dataset show that the proposed approach improves on the state of the art. Moreover, the proposed technique can also be used in mono-spectral settings, obtaining a similar performance to triplet network descriptors, but requiring less training data.
Notes: ADAS; 600.086; 600.118
Approved: no
Call Number: Admin @ si @ ASA2017
Serial: 2914

Author: Angel Sappa; P. Carvajal; Cristhian A. Aguilera-Carrasco; Miguel Oliveira; Dennis Romero; Boris X. Vintimilla
Title: Wavelet based visible and infrared image fusion: a comparative study
Type: Journal Article
Year: 2016
Publication: Sensors
Abbreviated Journal: SENS
Volume: 16
Issue: 6
Pages: 1-15
Keywords: Image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most of the approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).
Notes: ADAS; 600.086; 600.076
Approved: no
Call Number: Admin @ si @ SCA2016
Serial: 2807

Author: Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias; A. Moreira
Title: Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives
Type: Journal Article
Year: 2016
Publication: Robotics and Autonomous Systems
Abbreviated Journal: RAS
Volume: 83
Pages: 312-325
Keywords: Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives
Abstract: When an autonomous vehicle is traveling through some scenario, it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial to create and update over time a representation of the environment observed by the vehicle. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro-scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large-scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
Publisher: Elsevier B.V.
Notes: ADAS; 600.086; 600.076
Approved: no
Call Number: Admin @ si @ OSS2016a
Serial: 2806

Author: Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias; A. Moreira
Title: Incremental texture mapping for autonomous driving
Type: Journal Article
Year: 2016
Publication: Robotics and Autonomous Systems
Abbreviated Journal: RAS
Volume: 84
Pages: 113-128
Keywords: Scene reconstruction; Autonomous driving; Texture mapping
Abstract: Autonomous vehicles have a large number of on-board sensors, not only to provide coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that is fed by the data from all these sensors. We propose an algorithm which is capable of mapping texture collected from vision-based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad-quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine-quality textures.
Notes: ADAS; 600.086
Approved: no
Call Number: Admin @ si @ OSS2016b
Serial: 2912

Author: Alejandro Gonzalez Alzate; David Vazquez; Antonio Lopez; Jaume Amores
Title: On-Board Object Detection: Multicue, Multimodal, and Multiview Random Forest of Local Experts
Type: Journal Article
Year: 2017
Publication: IEEE Transactions on Cybernetics
Abbreviated Journal: Cyber
Volume: 47
Issue: 11
Pages: 3980-3990
Keywords: Multicue; multimodal; multiview; object detection
Abstract: Despite recent significant advances, object detection continues to be an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities, and a strong multiview (MV) classifier that accounts for different object views and poses. In this paper, we provide an extensive evaluation that gives insight into how each of these aspects (multicue, multimodality, and strong MV classifier) affects accuracy both individually and when integrated together. In the multimodality component, we explore the fusion of RGB and depth maps obtained by high-definition light detection and ranging, a type of modality that is starting to receive increasing attention. As our analysis reveals, although all the aforementioned aspects significantly help in improving the accuracy, the fusion of visible spectrum and depth information allows the accuracy to be boosted by a much larger margin. The resulting detector not only ranks among the top performers on the challenging KITTI benchmark, but is also built upon very simple blocks that are easy to implement and computationally efficient.
ISSN: 2168-2267
Notes: ADAS; 600.085; 600.082; 600.076; 600.118
Approved: no
Call Number: Admin @ si @
Serial: 2810

Author: Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez
Title: Hierarchical Adaptive Structural SVM for Domain Adaptation
Type: Journal Article
Year: 2016
Publication: International Journal of Computer Vision
Abbreviated Journal: IJCV
Volume: 119
Issue: 2
Pages: 159-178
Keywords: Domain Adaptation; Pedestrian Detection
Abstract: A key topic in classification is the accuracy loss produced when the data distribution in the training (source) domain differs from that in the testing (target) domain. This is being recognized as a very relevant problem for many computer vision tasks such as image classification, object detection, and object category recognition. In this paper, we present a novel domain adaptation method that leverages multiple target domains (or sub-domains) in a hierarchical adaptation tree. The core idea is to exploit the commonalities and differences of the jointly considered target domains. Given the relevance of structural SVM (SSVM) classifiers, we apply our idea to the adaptive SSVM (A-SSVM), which only requires the target domain samples together with the existing source-domain classifier for performing the desired adaptation. Altogether, we term our proposal hierarchical A-SSVM (HA-SSVM). As proof of concept, we use HA-SSVM for pedestrian detection, object category recognition, and face recognition. In the former, we apply HA-SSVM to the deformable part-based model (DPM), while in the rest HA-SSVM is applied to multi-category classifiers. We show how HA-SSVM is effective in increasing the detection/recognition accuracy with respect to adaptation strategies that ignore the structure of the target data. Since the sub-domains of the target data are not always known a priori, we show how HA-SSVM can incorporate sub-domain discovery for object category recognition.
Publisher: Springer US
ISSN: 0920-5691
Notes: ADAS; 600.085; 600.082; 600.076
Approved: no
Call Number: Admin @ si @ XRV2016
Serial: 2669

Author: Katerine Diaz; Aura Hernandez-Sabate; Antonio Lopez
Title: A reduced feature set for driver head pose estimation
Type: Journal Article
Year: 2016
Publication: Applied Soft Computing
Abbreviated Journal: ASOC
Volume: 45
Pages: 98-107
Keywords: Head pose estimation; driving performance evaluation; subspace based methods; linear regression
Abstract: Evaluation of driving performance is of utmost importance in order to reduce the road accident rate. Since driving ability includes visual-spatial and operational attention, among others, head pose estimation of the driver is a crucial indicator of driving performance. This paper proposes a new automatic method for coarse and fine estimation of the driver's head yaw angle. We rely on a set of geometric features computed from just three representative facial keypoints, namely the centers of the eyes and the nose tip. With these geometric features, our method combines two manifold embedding methods and a linear regression method. In addition, the method has a confidence mechanism to decide whether the classification of a sample is reliable. The approach has been tested using the CMU-PIE dataset and our own driver dataset. Despite the very few facial keypoints required, the results are comparable to those of state-of-the-art techniques. The low computational cost of the method and its robustness make it feasible to integrate it into mass consumer devices as a real-time application.
Notes: ADAS; 600.085; 600.076; IAM
Approved: no
Call Number: Admin @ si @ DHL2016
Serial: 2760

Author: Zhijie Fang; David Vazquez; Antonio Lopez
Title: On-Board Detection of Pedestrian Intentions
Type: Journal Article
Year: 2017
Publication: Sensors
Abbreviated Journal: SENS
Volume: 17
Issue: 10
Pages: 2193
Keywords: pedestrian intention; ADAS; self-driving
Abstract: Avoiding vehicle-to-pedestrian crashes is a critical requirement for today's advanced driver assistance systems (ADAS) and future self-driving vehicles. Accordingly, detecting pedestrians from raw sensor data has a history of more than 15 years of research, with vision playing a central role. During the last years, deep learning has boosted the accuracy of image-based pedestrian detectors. However, detection is just the first step towards answering the core question, namely, is the vehicle going to crash into a pedestrian if preventive actions are not taken? Therefore, knowing as soon as possible if a detected pedestrian has the intention of crossing the road ahead of the vehicle is essential for performing safe and comfortable maneuvers that prevent a crash. However, compared to pedestrian detection, there is relatively little literature on detecting pedestrian intentions. This paper aims to contribute along this line by presenting a new vision-based approach which analyzes the pose of a pedestrian over several frames to determine if he or she is going to enter the road or not. We present experiments showing 750 ms of anticipation for pedestrians crossing the road, which at a typical urban driving speed of 50 km/h can provide 15 additional meters (compared to a pure pedestrian detector) for vehicle automatic reactions or to warn the driver. Moreover, in contrast with state-of-the-art methods, our approach is monocular, requiring neither stereo nor optical flow information.
Notes: ADAS; 600.085; 600.076; 601.223; 600.116; 600.118
Approved: no
Call Number: Admin @ si @ FVL2017
Serial: 2983