Publicacions CVC
Author
...
Title
...
Type
Year
...
Publication
...
Abbreviated Journal
...
Volume
...
Issue
...
Pages
...
Keywords
...
Abstract
Manually annotating images to develop vision models has been a major bottleneck since computer vision and machine learning started to walk together. This has become even more evident now that computer vision relies on data-hungry deep learning techniques. When addressing on-board perception for autonomous driving, the curse of data annotation is exacerbated by the use of additional sensors such as LiDAR. Therefore, any approach that reduces such time-consuming and costly work is of high interest for autonomous driving and, in fact, for any application requiring some sort of artificial perception. In the last decade, it has been shown that leveraging synthetic data is a paradigm worth pursuing in order to minimize manual data annotation. The reason is that the automatic process of generating synthetic data can also produce different types of associated annotations (e.g. object bounding boxes for synthetic images and LiDAR pointclouds, pixel/point-wise semantic information, etc.). Directly using synthetic data to train deep perception models may not be the definitive solution in all circumstances, since a synth-to-real domain shift can appear. In this context, this work focuses on leveraging synthetic data to alleviate manual annotation for three perception tasks related to driving assistance and autonomous driving. In all cases, we assume the use of deep convolutional neural networks (CNNs) to develop our perception models.
The first task addresses traffic sign recognition (TSR), a kind of multi-class classification problem. We assume that the number of sign classes to be recognized must be suddenly increased without having annotated samples with which to re-train the corresponding TSR CNN. We show that, by leveraging synthetic samples of the new classes and transforming them with a generative adversarial network (GAN) trained on the known classes (i.e. without using samples from the new classes), it is possible to re-train the TSR CNN to properly classify all the signs for a ∼1/4 ratio of new to known sign classes.
The second task addresses on-board 2D object detection, focusing on vehicles and pedestrians. In this case, we assume that we receive a set of images without the annotations required to train an object detector, i.e. without object bounding boxes. Therefore, our goal is to self-annotate these images so that they can later be used to train the desired object detector. To reach this goal, we leverage synthetic data and propose a semi-supervised learning approach based on the co-training idea. In fact, we use a GAN to reduce the synth-to-real domain shift before applying co-training. Our quantitative results show that co-training and GAN-based image-to-image translation complement each other, allowing object detectors to be trained without manual annotation while almost reaching the upper-bound performance of detectors trained on human annotations.
While the previous tasks focus on vision-based perception, the third task addresses LiDAR pointclouds. Our initial goal was to develop a 3D object detector trained on synthetic LiDAR-style pointclouds. While for images we may expect a synth/real-to-real domain shift due to differences in appearance (e.g. when source and target images come from different camera sensors), we did not expect this for LiDAR pointclouds, since these active sensors factor out appearance and provide sampled shapes. However, in practice, we have seen that domain shift can appear even among real-world LiDAR pointclouds. Factors such as the sampling parameters of the LiDARs, the sensor suite configuration on board the ego-vehicle, and the human annotation of 3D bounding boxes do induce a domain shift. We show this through comprehensive experiments with different publicly available datasets and 3D detectors. This redirected our goal towards the design of a GAN for pointcloud-to-pointcloud translation, a relatively unexplored topic.
Finally, it is worth mentioning that all the synthetic datasets used for these three tasks have been designed and generated in the context of this PhD work and will be publicly released. Overall, we think this PhD work presents several steps forward that encourage leveraging synthetic data for developing deep perception models in the field of driving assistance and autonomous driving.
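The second task above rests on the generic co-training idea: two learners, each with its own view of the data, iteratively exchange their most confident pseudo-labels so that an unlabelled pool gets self-annotated. The snippet below is a minimal, hypothetical sketch of that loop, not the thesis code: it assumes scikit-learn logistic-regression classifiers, toy features, and a feature-split as the two views, whereas the thesis works with deep object detectors and an initial labelled pool of GAN-translated synthetic images.
```python
# Hypothetical co-training sketch on toy data (illustration only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy stand-ins: a small labelled pool and a large unlabelled pool.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
lab_idx = np.arange(100)          # "synthetic, automatically annotated" data
unl_idx = np.arange(100, 2000)    # "real images" to self-annotate

# Two views of each sample, one per learner (classic co-training setup).
views = [slice(0, 10), slice(10, 20)]
models = [LogisticRegression(max_iter=1000) for _ in views]

# Each learner keeps its own growing training set, seeded with the labelled pool.
train_X = [X[lab_idx][:, v] for v in views]
train_y = [y[lab_idx].copy() for _ in views]

K = 20                            # pseudo-labels each learner contributes per round
for _ in range(10):               # co-training rounds
    for i in range(2):
        models[i].fit(train_X[i], train_y[i])
    picked = []
    for i in range(2):
        other = 1 - i
        # Learner i scores the unlabelled pool on its own view ...
        proba = models[i].predict_proba(X[unl_idx][:, views[i]])
        top = np.argsort(-proba.max(axis=1))[:K]    # most confident samples
        pseudo = proba.argmax(axis=1)[top]
        # ... and hands its most confident pseudo-labels to the other learner.
        train_X[other] = np.vstack([train_X[other],
                                    X[unl_idx[top]][:, views[other]]])
        train_y[other] = np.concatenate([train_y[other], pseudo])
        picked.append(unl_idx[top])
    # Pseudo-labelled samples leave the unlabelled pool.
    unl_idx = np.setdiff1d(unl_idx, np.concatenate(picked))

# Sanity check against the held-back ground truth of what is still unlabelled.
acc = (models[0].predict(X[unl_idx][:, views[0]]) == y[unl_idx]).mean()
print(f"view-0 learner accuracy on the remaining unlabelled pool: {acc:.3f}")
```
In a detection setting the two views are typically two differently trained detectors rather than a feature split, and confidence comes from detection scores; the sketch only conveys the exchange-of-pseudo-labels mechanism.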
Address
...
Corporate Author
...
Thesis
Publisher
...
Place of Publication
...
Editor
...
Language
...
Summary Language
...
Original Title
...
Series Editor
...
Series Title
...
Abbreviated Series Title
...
Series Volume
...
Series Issue
...
Edition
...
ISSN
...
ISBN
...
Medium
...
Area
...
Expedition
...
Conference
...
Notes
...
Approved
Location
Call Number
...
Serial
Marked
Copy
Selected
User Keys
...
User Notes
...
User File
...
User Groups
...
Cite Key
...
Related
...
File
URL
...
DOI
...
Online publication. Cite with this text:
...