|
Records |
|
Author |
Daniel Hernandez; Antonio Espinosa; David Vazquez; Antonio Lopez; Juan C. Moure |


|
|
Title |
3D Perception With Slanted Stixels on GPU |
Type  |
Journal Article |
|
Year |
2021 |
Publication |
IEEE Transactions on Parallel and Distributed Systems |
Abbreviated Journal |
TPDS |
|
|
Volume |
32 |
Issue |
10 |
Pages |
2434-2447 |
|
|
Keywords |
Daniel Hernandez-Juarez; Antonio Espinosa; David Vazquez; Antonio M. Lopez; Juan C. Moure |
|
|
Abstract |
This article presents a GPU-accelerated software design of the recently proposed model of Slanted Stixels, which represents the geometric and semantic information of a scene in a compact and accurate way. We reformulate the measurement depth model to reduce the computational complexity of the algorithm, relying on the confidence of the depth estimation and the identification of invalid values to handle outliers. The proposed massively parallel scheme and data layout for the irregular computation pattern, which corresponds to a Dynamic Programming paradigm, are described and carefully analyzed in performance terms. Performance is shown to scale gracefully on current generation embedded GPUs. We assess the proposed methods in terms of semantic and geometric accuracy as well as run-time performance on three publicly available benchmark datasets. Our approach achieves real-time performance with high accuracy for 2048 × 1024 image sizes and 4 × 4 Stixel resolution on the low-power embedded GPU of an NVIDIA Tegra Xavier. |
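The method's core, as the abstract notes, is a per-column Dynamic Programming optimization mapped onto the GPU. Purely as an illustration of that column-wise DP, and not the authors' GPU scheme or measurement model, the following Python sketch segments one image column into stixel-like runs; the three structural classes, the per-pixel cost input, and the fixed change penalty are placeholder assumptions.

import numpy as np

CLASSES = ("ground", "object", "sky")   # assumed structural classes
CHANGE_PENALTY = 2.0                    # assumed prior cost for opening a new stixel

def column_stixels(pixel_cost):
    """pixel_cost: (H, 3) array of per-pixel data costs, one column per class in CLASSES.
    Returns a list of (top_row, bottom_row, class_name) segments for one image column."""
    H, C = pixel_cost.shape
    # prefix[v, c] = sum of pixel_cost[0:v, c]; segment costs become O(1) lookups
    prefix = np.vstack([np.zeros((1, C)), np.cumsum(pixel_cost, axis=0)])
    best = np.full(H + 1, np.inf)            # best[v] = minimal cost of explaining rows [0, v)
    best[0] = 0.0
    back = np.zeros((H + 1, 2), dtype=int)   # back-pointers: (segment start, class index)
    for v in range(1, H + 1):
        for u in range(v):                   # candidate segment covering rows [u, v)
            seg = prefix[v] - prefix[u]
            c = int(np.argmin(seg))
            cost = best[u] + seg[c] + CHANGE_PENALTY
            if cost < best[v]:
                best[v] = cost
                back[v] = (u, c)
    segments, v = [], H                      # recover the segmentation from the back-pointers
    while v > 0:
        u, c = back[v]
        segments.append((u, v - 1, CLASSES[c]))
        v = u
    return segments[::-1]

For example, column_stixels(np.random.rand(64, 3)) returns an ordered partition of a 64-row column into labeled segments; the real algorithm replaces the toy costs with the depth/semantic measurement model and parallelizes the quadratic per-column scan across image columns.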
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.124; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ HEV2021 |
Serial |
3561 |
|
Permanent link to this record |
|
|
|
|
Author |
Jose L. Gomez; Gabriel Villalonga; Antonio Lopez |


|
|
Title |
Co-Training for Deep Object Detection: Comparing Single-Modal and Multi-Modal Approaches |
Type  |
Journal Article |
|
Year |
2021 |
Publication |
Sensors |
Abbreviated Journal |
SENS |
|
|
Volume |
21 |
Issue |
9 |
Pages |
3185 |
|
|
Keywords |
co-training; multi-modality; vision-based object detection; ADAS; self-driving |
|
|
Abstract |
Top-performing computer vision models are powered by convolutional neural networks (CNNs). Training an accurate CNN highly depends on both the raw sensor data and their associated ground truth (GT). Collecting such GT is usually done through human labeling, which is time-consuming and does not scale as we wish. This data-labeling bottleneck may be intensified due to domain shifts among image sensors, which could force per-sensor data labeling. In this paper, we focus on the use of co-training, a semi-supervised learning (SSL) method, for obtaining self-labeled object bounding boxes (BBs), i.e., the GT to train deep object detectors. In particular, we assess the goodness of multi-modal co-training by relying on two different views of an image, namely, appearance (RGB) and estimated depth (D). Moreover, we compare appearance-based single-modal co-training with its multi-modal counterpart. Our results suggest that, in a standard SSL setting (no domain shift, a few human-labeled data) and under virtual-to-real domain shift (many virtual-world labeled data, no human-labeled data), multi-modal co-training outperforms single-modal. In the latter case, by performing GAN-based domain translation, both co-training modalities are on par, at least when using an off-the-shelf depth estimation model not specifically trained on the translated images. |
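As a rough sketch of the multi-modal co-training loop the abstract describes, and not the authors' implementation, the following Python outline shows two detectors (one per view) exchanging confident pseudo-labels; the ToyDetector interface, the confidence threshold, and the placeholder predictions are assumptions.

from dataclasses import dataclass, field

CONF_THRESHOLD = 0.8      # assumed confidence needed to promote a detection to a pseudo-label

@dataclass
class ToyDetector:
    """Stand-in for a deep object detector operating on one modality (RGB or depth)."""
    name: str
    labeled: list = field(default_factory=list)   # (image, [bounding boxes]) training pairs

    def fit(self):
        pass                                      # (re)train on self.labeled -- omitted here

    def predict(self, image):
        return [((0, 0, 10, 10), 0.9)]            # placeholder (bounding_box, confidence) list

def co_train(rgb_model, depth_model, unlabeled_pairs, rounds=3):
    """unlabeled_pairs: list of (rgb_image, depth_image) views of the same scenes."""
    for _ in range(rounds):
        rgb_model.fit()
        depth_model.fit()
        for rgb_img, depth_img in unlabeled_pairs:
            # Each view labels data for the *other* view, the defining step of co-training.
            rgb_boxes = [bb for bb, c in rgb_model.predict(rgb_img) if c >= CONF_THRESHOLD]
            depth_boxes = [bb for bb, c in depth_model.predict(depth_img) if c >= CONF_THRESHOLD]
            depth_model.labeled.append((depth_img, rgb_boxes))
            rgb_model.labeled.append((rgb_img, depth_boxes))
    return rgb_model, depth_model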
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ GVL2021 |
Serial |
3562 |
|
Permanent link to this record |
|
|
|
|
Author |
Hannes Mueller; Andre Groeger; Jonathan Hersh; Andrea Matranga; Joan Serrat |


|
|
Title |
Monitoring war destruction from space using machine learning |
Type  |
Journal Article |
|
Year |
2021 |
Publication |
Proceedings of the National Academy of Sciences of the United States of America |
Abbreviated Journal |
PNAS |
|
|
Volume |
118 |
Issue |
23 |
Pages |
e2025400118 |
|
|
Keywords |
|
|
|
Abstract |
Existing data on building destruction in conflict zones rely on eyewitness reports or manual detection, which makes such data generally scarce, incomplete, and potentially biased. This lack of reliable data imposes severe limitations for media reporting, humanitarian relief efforts, human-rights monitoring, reconstruction initiatives, and academic studies of violent conflict. This article introduces an automated method of measuring destruction in high-resolution satellite images using deep-learning techniques combined with label augmentation and spatial and temporal smoothing, which exploit the underlying spatial and temporal structure of destruction. As a proof of concept, we apply this method to the Syrian civil war and reconstruct the evolution of damage in major cities across the country. Our approach allows generating destruction data with unprecedented scope, resolution, and frequency—and makes use of the ever-higher frequency at which satellite imagery becomes available. |
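The abstract mentions temporal smoothing of the per-image predictions as one ingredient of the pipeline. The Python fragment below is only a minimal sketch of one plausible form of such smoothing, assuming per-date destruction scores on a tile grid and a simple centered moving average; the paper's actual filters and window may differ.

import numpy as np

def smooth_temporal(scores, window=3):
    """scores: (T, H, W) per-date CNN destruction probabilities for a grid of image tiles.
    Returns the array smoothed with a centered moving average along the time axis."""
    T = scores.shape[0]
    out = np.empty(scores.shape)
    for t in range(T):
        lo, hi = max(0, t - window // 2), min(T, t + window // 2 + 1)
        out[t] = scores[lo:hi].mean(axis=0)   # average the scores of neighbouring dates
    return out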
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ MGH2021 |
Serial |
3584 |
|
Permanent link to this record |
|
|
|
|
Author |
Idoia Ruiz; Joan Serrat |

|
|
Title |
Hierarchical Novelty Detection for Traffic Sign Recognition |
Type  |
Journal Article |
|
Year |
2022 |
Publication |
Sensors |
Abbreviated Journal |
SENS |
|
|
Volume |
22 |
Issue |
12 |
Pages |
4389 |
|
|
Keywords |
Novelty detection; hierarchical classification; deep learning; traffic sign recognition; autonomous driving; computer vision |
|
|
Abstract |
Recent works have made significant progress in novelty detection, i.e., the problem of detecting samples of novel classes, never seen during training, while classifying those that belong to known classes. However, the only information this task provides about novel samples is that they are unknown. In this work, we leverage hierarchical taxonomies of classes to provide informative outputs for samples of novel classes. We predict their closest class in the taxonomy, i.e., its parent class. We address this problem, known as hierarchical novelty detection, by proposing a novel loss, namely the Hierarchical Cosine Loss, which is designed to learn class prototypes along with an embedding of discriminative features consistent with the taxonomy. We apply it to traffic sign recognition, where we predict the parent class semantics for new types of traffic signs. Our model beats state-of-the-art approaches on two large-scale traffic sign benchmarks, Mapillary Traffic Sign Dataset (MTSD) and Tsinghua-Tencent 100K (TT100K), and performs similarly on natural image benchmarks (AWA2, CUB). For TT100K and MTSD, our approach is able to detect novel samples at the correct nodes of the hierarchy with 81% and 36% accuracy, respectively, at 80% known class accuracy. |
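The loss the abstract introduces combines class prototypes with a cosine-similarity embedding. The snippet below is only a minimal cosine-prototype classification term of the kind such a loss plausibly builds on, not the paper's Hierarchical Cosine Loss (which additionally encodes the taxonomy); the scale factor and tensor shapes are assumptions.

import torch
import torch.nn.functional as F

def cosine_prototype_loss(embeddings, prototypes, targets, scale=10.0):
    """embeddings: (B, D) image features; prototypes: (C, D) learnable class prototypes;
    targets: (B,) indices of the known classes. Cross-entropy over scaled cosine similarities."""
    logits = scale * F.normalize(embeddings, dim=1) @ F.normalize(prototypes, dim=1).T
    return F.cross_entropy(logits, targets)

# Example: loss = cosine_prototype_loss(torch.randn(8, 128), torch.randn(40, 128),
#                                        torch.randint(0, 40, (8,)))

At test time, a sample whose best prototype similarity stays below a threshold could be routed to the parent node of its closest leaf class, which is the hierarchical novelty-detection behaviour described above.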
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.154 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RuS2022 |
Serial |
3684 |
|
Permanent link to this record |
|
|
|
|
Author |
Jose Luis Gomez; Gabriel Villalonga; Antonio Lopez |

|
|
Title |
Co-Training for Unsupervised Domain Adaptation of Semantic Segmentation Models |
Type  |
Journal Article |
|
Year |
2023 |
Publication |
Sensors – Special Issue on “Machine Learning for Autonomous Driving Perception and Prediction” |
Abbreviated Journal |
SENS |
|
|
Volume |
23 |
Issue |
2 |
Pages |
621 |
|
|
Keywords |
Domain adaptation; semi-supervised learning; Semantic segmentation; Autonomous driving |
|
|
Abstract |
Semantic image segmentation is a central and challenging task in autonomous driving, addressed by training deep models. Since this training is limited by the high cost of human-based image labeling, using synthetic images with automatically generated labels together with unlabeled real-world images is a promising alternative. This implies addressing an unsupervised domain adaptation (UDA) problem. In this paper, we propose a new co-training procedure for synth-to-real UDA of semantic segmentation models. It consists of a self-training stage, which provides two domain-adapted models, and a model collaboration loop for the mutual improvement of these two models. These models are then used to provide the final semantic segmentation labels (pseudo-labels) for the real-world images. The overall procedure treats the deep models as black boxes and drives their collaboration at the level of pseudo-labeled target images, i.e., it requires neither modifying loss functions nor explicit feature alignment. We test our proposal on standard synthetic and real-world datasets for on-board semantic segmentation. Our procedure shows improvements ranging from ∼13 to ∼26 mIoU points over baselines, thus establishing new state-of-the-art results. |
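As a rough outline of the black-box collaboration loop summarized above, and not the authors' code, the following Python sketch retrains each segmentation model on the confident pseudo-labels produced by the other model; the fit/predict interface, the confidence threshold, and the cycle count are assumptions.

def collaboration_loop(model_a, model_b, source_data, target_images,
                       cycles=3, conf_threshold=0.9):
    """model_a / model_b: segmentation models exposing fit(dataset) and
    predict(image) -> (label_map, mean_confidence).
    source_data: list of (image, label_map) pairs from the labeled (synthetic) domain."""
    def keep(preds):
        # keep only pseudo-labeled target images whose mean confidence clears the threshold
        return [(img, lab) for img, lab, conf in preds if conf >= conf_threshold]
    for _ in range(cycles):
        pseudo_a = [(img,) + model_a.predict(img) for img in target_images]
        pseudo_b = [(img,) + model_b.predict(img) for img in target_images]
        # Each model is retrained on the labeled source data plus the confident
        # pseudo-labels produced by the *other* model; losses and features stay untouched.
        model_a.fit(source_data + keep(pseudo_b))
        model_b.fit(source_data + keep(pseudo_a))
    return model_a, model_b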
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ GVL2023 |
Serial |
3705 |
|
Permanent link to this record |
|
|
|
|
Author |
Aura Hernandez-Sabate; Jose Elias Yauri; Pau Folch; Miquel Angel Piera; Debora Gil |

|
|
Title |
Recognition of the Mental Workloads of Pilots in the Cockpit Using EEG Signals |
Type  |
Journal Article |
|
Year |
2022 |
Publication |
Applied Sciences |
Abbreviated Journal |
APPLSCI |
|
|
Volume |
12 |
Issue |
5 |
Pages |
2298 |
|
|
Keywords |
Cognitive states; Mental workload; EEG analysis; Neural networks; Multimodal data fusion |
|
|
Abstract |
The commercial flight deck is a naturally multi-tasking work environment, one in which interruptions are frequent and come in various forms, contributing in many cases to aviation incident reports. Automatic characterization of pilots’ workloads is essential to preventing these kinds of incidents. In addition, minimizing the physiological sensor network as much as possible remains both a challenge and a requirement. Electroencephalogram (EEG) signals have shown high correlations with specific cognitive and mental states, such as workload. However, there is not enough evidence in the literature to validate how well models generalize in cases of new subjects performing tasks with workloads similar to the ones included during the model’s training. In this paper, we propose a convolutional neural network to classify EEG features across different mental workloads in a continuous performance task test that partly measures working memory and working memory capacity. Our model is valid at the general population level and it is able to transfer task learning to pilot mental workload recognition in a simulated operational environment. |
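As a generic stand-in for the classifier the abstract describes, and not the paper's architecture, the following PyTorch sketch shows a small 1D CNN over multichannel EEG windows; the channel count, window length, layer sizes, and number of workload classes are assumptions.

import torch
import torch.nn as nn

class EEGWorkloadNet(nn.Module):
    """Toy 1D CNN over multichannel EEG windows; an illustrative stand-in only."""
    def __init__(self, n_channels=14, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)       # one logit per workload level

    def forward(self, x):                                # x: (batch, n_channels, time_samples)
        return self.classifier(self.features(x).squeeze(-1))

# Example: logits = EEGWorkloadNet()(torch.randn(8, 14, 256))  -> shape (8, 3)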
|
|
Address |
February 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM; ADAS; 600.139; 600.145; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ HYF2022 |
Serial |
3720 |
|
Permanent link to this record |