Author Gabriel Villalonga; Joost Van de Weijer; Antonio Lopez
  Title Recognizing new classes with synthetic data in the loop: application to traffic sign recognition Type Journal Article
Year 2020 Publication Sensors Abbreviated Journal SENS
  Volume 20 Issue 3 Pages 583  
Abstract On-board vision systems may need to increase the number of classes that can be recognized in a relatively short period. For instance, a traffic sign recognition system may suddenly be required to recognize new signs. Since collecting and annotating samples of such new classes may need more time than we wish, especially for uncommon signs, we propose a method to generate these samples by combining synthetic images and Generative Adversarial Network (GAN) technology. In particular, the GAN is trained on synthetic and real-world samples from known classes to perform synthetic-to-real domain adaptation, but applied to synthetic samples of the new classes. Using the Tsinghua dataset with a synthetic counterpart, SYNTHIA-TS, we have run an extensive set of experiments. The results show that the proposed method is indeed effective, provided that we use a proper Convolutional Neural Network (CNN) to perform the traffic sign recognition (classification) task as well as a proper GAN to transform the synthetic images. Here, a ResNet101-based classifier and domain adaptation based on CycleGAN performed extremely well for a new/known class ratio of ∼1/4; even for more challenging ratios such as ∼4/1, the results are also very positive.
Notes LAMP; ADAS; 600.118; 600.120; CIC Approved no
  Call Number Admin @ si @ VWL2020 Serial 3405  
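As a concrete picture of the training loop sketched in the abstract, here is a minimal, hypothetical PyTorch fragment: a stand-in synth-to-real generator (the paper uses a CycleGAN) translates synthetic images of new classes, which are then mixed with real samples of known classes to train a ResNet-101 classifier. The class counts, generator, and data batches are assumptions for illustration, not the authors' code.

```python
# Minimal sketch, assuming a trained synth-to-real generator is available.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 40                      # known + new traffic sign classes (assumed)
device = "cuda" if torch.cuda.is_available() else "cpu"

classifier = models.resnet101(weights=None)
classifier.fc = nn.Linear(classifier.fc.in_features, num_classes)
classifier = classifier.to(device)

generator = nn.Identity().to(device)  # placeholder for a trained synth-to-real GAN generator

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(classifier.parameters(), lr=1e-3, momentum=0.9)

def train_step(real_batch, synth_new_batch):
    """One step mixing real known-class samples with translated new-class samples."""
    (x_real, y_real), (x_synth, y_new) = real_batch, synth_new_batch
    with torch.no_grad():
        x_translated = generator(x_synth.to(device))   # synth-to-real translation
    x = torch.cat([x_real.to(device), x_translated])
    y = torch.cat([y_real.to(device), y_new.to(device)])
    optimizer.zero_grad()
    loss = criterion(classifier(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for image batches.
real = (torch.randn(4, 3, 64, 64), torch.randint(0, 30, (4,)))
synth = (torch.randn(4, 3, 64, 64), torch.randint(30, 40, (4,)))
print(train_step(real, synth))
```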
 

 
Author Jose Luis Gomez; Gabriel Villalonga; Antonio Lopez
  Title Co-Training for Deep Object Detection: Comparing Single-Modal and Multi-Modal Approaches Type Journal Article
Year 2021 Publication Sensors Abbreviated Journal SENS
  Volume 21 Issue 9 Pages 3185  
  Keywords co-training; multi-modality; vision-based object detection; ADAS; self-driving  
Abstract Top-performing computer vision models are powered by convolutional neural networks (CNNs). Training an accurate CNN highly depends on both the raw sensor data and their associated ground truth (GT). Collecting such GT is usually done through human labeling, which is time-consuming and does not scale as we wish. This data-labeling bottleneck may be intensified due to domain shifts among image sensors, which could force per-sensor data labeling. In this paper, we focus on the use of co-training, a semi-supervised learning (SSL) method, for obtaining self-labeled object bounding boxes (BBs), i.e., the GT to train deep object detectors. In particular, we assess the goodness of multi-modal co-training by relying on two different views of an image, namely, appearance (RGB) and estimated depth (D). Moreover, we compare appearance-based single-modal co-training with the multi-modal variant. Our results suggest that in a standard SSL setting (no domain shift, a few human-labeled data) and under virtual-to-real domain shift (many virtual-world labeled data, no human-labeled data), multi-modal co-training outperforms single-modal co-training. In the latter case, by performing GAN-based domain translation, both co-training modalities are on par, at least when using an off-the-shelf depth estimation model not specifically trained on the translated images.
  Notes ADAS; 600.118 Approved no  
  Call Number Admin @ si @ GVL2021 Serial 3562  
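To make the pseudo-label exchange concrete, the sketch below runs one schematic multi-modal co-training round with two torchvision Faster R-CNN detectors (one per view) over a hypothetical list of unlabeled RGB/depth pairs. It illustrates the general co-training idea under assumed thresholds and toy data, not the authors' implementation.

```python
# Schematic co-training round: confident detections from one view become
# pseudo-labels for the other. Detectors, threshold, and data are stand-ins.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

det_rgb = fasterrcnn_resnet50_fpn(weights=None, num_classes=5)
det_depth = fasterrcnn_resnet50_fpn(weights=None, num_classes=5)
opt_rgb = torch.optim.SGD(det_rgb.parameters(), lr=1e-3)
opt_depth = torch.optim.SGD(det_depth.parameters(), lr=1e-3)

SCORE_THR = 0.8  # confidence threshold for self-labeled bounding boxes (assumed)

def pseudo_labels(detector, image):
    """Run one detector on an unlabeled image and keep confident boxes."""
    detector.eval()
    with torch.no_grad():
        pred = detector([image])[0]
    keep = pred["scores"] > SCORE_THR
    return {"boxes": pred["boxes"][keep], "labels": pred["labels"][keep]}

def cotrain_round(unlabeled):
    """Each modality is trained on the other modality's confident detections."""
    for rgb, depth in unlabeled:
        t_from_rgb = pseudo_labels(det_rgb, rgb)
        t_from_depth = pseudo_labels(det_depth, depth)
        for det, opt, img, tgt in ((det_rgb, opt_rgb, rgb, t_from_depth),
                                   (det_depth, opt_depth, depth, t_from_rgb)):
            if len(tgt["boxes"]) == 0:
                continue                      # nothing confident to learn from
            det.train()
            losses = det([img], [tgt])        # torchvision returns a loss dict
            loss = sum(losses.values())
            opt.zero_grad()
            loss.backward()
            opt.step()

# Smoke test on one random RGB/depth pair (depth replicated to 3 channels).
cotrain_round([(torch.rand(3, 256, 256), torch.rand(3, 256, 256))])
```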
 

 
Author Idoia Ruiz; Joan Serrat
  Title Hierarchical Novelty Detection for Traffic Sign Recognition Type Journal Article
Year 2022 Publication Sensors Abbreviated Journal SENS
  Volume 22 Issue 12 Pages 4389  
  Keywords Novelty detection; hierarchical classification; deep learning; traffic sign recognition; autonomous driving; computer vision  
Abstract Recent works have made significant progress in novelty detection, i.e., the problem of detecting samples of novel classes, never seen during training, while classifying those that belong to known classes. However, the only information this task provides about novel samples is that they are unknown. In this work, we leverage hierarchical taxonomies of classes to provide informative outputs for samples of novel classes. We predict their closest class in the taxonomy, i.e., their parent class. We address this problem, known as hierarchical novelty detection, by proposing a novel loss, namely the Hierarchical Cosine Loss, which is designed to learn class prototypes along with an embedding of discriminative features consistent with the taxonomy. We apply it to traffic sign recognition, where we predict the parent class semantics for new types of traffic signs. Our model beats state-of-the-art approaches on two large-scale traffic sign benchmarks, the Mapillary Traffic Sign Dataset (MTSD) and Tsinghua-Tencent 100K (TT100K), and performs similarly on natural image benchmarks (AWA2, CUB). For TT100K and MTSD, our approach is able to detect novel samples at the correct nodes of the hierarchy with 81% and 36% accuracy, respectively, at 80% known-class accuracy.
  Notes ADAS; 600.154 Approved no  
  Call Number Admin @ si @ RuS2022 Serial 3684  
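As an illustration of routing novel samples to parent classes via prototypes, the sketch below builds L2-normalized class prototypes and falls back to the most similar parent when the best leaf similarity drops below a threshold. The taxonomy, threshold, and prototype construction here are assumptions for illustration; the paper's Hierarchical Cosine Loss learns the prototypes jointly with the embedding.

```python
# Illustrative sketch only, not the paper's exact loss or inference rule.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 16                                     # embedding dimension (assumed)
leaves = ["stop", "yield", "speed_30", "speed_50"]
parent_of = {"stop": "regulatory", "yield": "regulatory",
             "speed_30": "speed_limit", "speed_50": "speed_limit"}

leaf_protos = F.normalize(torch.randn(len(leaves), d), dim=1)
# Parent prototypes as the normalized mean of their children (one common choice).
parents = sorted(set(parent_of.values()))
parent_protos = F.normalize(torch.stack([
    leaf_protos[[i for i, c in enumerate(leaves) if parent_of[c] == p]].mean(0)
    for p in parents]), dim=1)

THRESHOLD = 0.6                            # novelty threshold (assumed)

def classify(embedding):
    """Return a leaf label, or a parent label if the sample looks novel."""
    z = F.normalize(embedding, dim=0)
    leaf_sims = leaf_protos @ z
    if leaf_sims.max() >= THRESHOLD:
        return leaves[leaf_sims.argmax()]
    parent_sims = parent_protos @ z
    return parents[parent_sims.argmax()] + " (novel subclass)"

print(classify(torch.randn(d)))
```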
 

 
Author Jose Luis Gomez; Gabriel Villalonga; Antonio Lopez
  Title Co-Training for Unsupervised Domain Adaptation of Semantic Segmentation Models Type Journal Article
Year 2023 Publication Sensors – Special Issue on “Machine Learning for Autonomous Driving Perception and Prediction” Abbreviated Journal SENS
  Volume 23 Issue 2 Pages 621  
  Keywords Domain adaptation; semi-supervised learning; Semantic segmentation; Autonomous driving  
Abstract Semantic image segmentation is a central and challenging task in autonomous driving, addressed by training deep models. Since this training suffers from the curse of human-based image labeling, using synthetic images with automatically generated labels together with unlabeled real-world images is a promising alternative. This implies addressing an unsupervised domain adaptation (UDA) problem. In this paper, we propose a new co-training procedure for synth-to-real UDA of semantic segmentation models. It consists of a self-training stage, which provides two domain-adapted models, and a model collaboration loop for the mutual improvement of these two models. These models are then used to provide the final semantic segmentation labels (pseudo-labels) for the real-world images. The overall procedure treats the deep models as black boxes and drives their collaboration at the level of pseudo-labeled target images, i.e., neither modification of loss functions nor explicit feature alignment is required. We test our proposal on standard synthetic and real-world datasets for on-board semantic segmentation. Our procedure shows improvements ranging from ∼13 to ∼26 mIoU points over baselines, thus establishing new state-of-the-art results.
  Notes ADAS; no proj Approved no  
  Call Number Admin @ si @ GVL2023 Serial 3705  
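One way to picture collaboration at the level of pseudo-labeled target images is an agreement rule between the two models. The sketch below uses toy models and an assumed confidence threshold (it is not the paper's exact collaboration policy): a pixel keeps its label only when both models agree confidently, and the rest are marked as ignore.

```python
# Agreement-based pseudo-labels; models, classes, and threshold are stand-ins.
import torch
import torch.nn as nn

IGNORE, CONF_THR = 255, 0.9                # assumed values
model_a = nn.Conv2d(3, 19, kernel_size=1)  # toy stand-ins for two domain-adapted nets
model_b = nn.Conv2d(3, 19, kernel_size=1)

def fuse_pseudo_labels(image):
    """Agreement-based pseudo-labels for one unlabeled real-world image."""
    with torch.no_grad():
        prob_a = model_a(image).softmax(dim=1)
        prob_b = model_b(image).softmax(dim=1)
    conf_a, lab_a = prob_a.max(dim=1)
    conf_b, lab_b = prob_b.max(dim=1)
    agree = (lab_a == lab_b) & (conf_a > CONF_THR) & (conf_b > CONF_THR)
    pseudo = torch.where(agree, lab_a, torch.full_like(lab_a, IGNORE))
    return pseudo        # usable with CrossEntropyLoss(ignore_index=255)

print(fuse_pseudo_labels(torch.randn(1, 3, 64, 64)).shape)  # (1, 64, 64)
```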
 

 
Author Fei Yang; Luis Herranz; Joost Van de Weijer; Jose Antonio Iglesias; Antonio Lopez; Mikhail Mozerov
  Title Variable Rate Deep Image Compression with Modulated Autoencoder Type Journal Article
Year 2020 Publication IEEE Signal Processing Letters Abbreviated Journal SPL
  Volume 27 Issue Pages 331-335  
Abstract Variable rate is a requirement for flexible and adaptable image and video compression. However, deep image compression methods (DIC) are optimized for a single fixed rate-distortion (R-D) tradeoff. While this can be addressed by training multiple models for different tradeoffs, the memory requirements increase proportionally to the number of models. Scaling the bottleneck representation of a shared autoencoder can provide variable rate compression with a single model. However, the R-D performance using this simple mechanism degrades at low bitrates, and the effective range of bitrates also shrinks. To address these limitations, we formulate the problem of variable R-D optimization for DIC, and propose modulated autoencoders (MAEs), where the representations of a shared autoencoder are adapted to the specific R-D tradeoff via a modulation network. Jointly training this modulated autoencoder and the modulation network provides an effective way to navigate the R-D operational curve. Our experiments show that the proposed method can achieve almost the same R-D performance as independent models with significantly fewer parameters.
Notes LAMP; ADAS; 600.141; 600.120; 600.118; ISE; CIC Approved no
  Call Number Admin @ si @ YHW2020 Serial 3346  
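The modulation idea can be sketched compactly: a tiny network maps the R-D tradeoff parameter to per-channel scales applied at the bottleneck of one shared autoencoder. The layer sizes and demodulation placement below are illustrative assumptions, not the paper's architecture.

```python
# Toy sketch of a modulated autoencoder: one shared model, many R-D points.
import torch
import torch.nn as nn

class ModulatedAE(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, channels, 4, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1))
        # Modulation network: scalar lambda -> per-channel scaling of the bottleneck.
        self.mod = nn.Sequential(nn.Linear(1, channels), nn.Softplus())

    def forward(self, x, lam):
        scale = self.mod(lam.view(-1, 1))[:, :, None, None]  # (B, C, 1, 1)
        y = self.enc(x) * scale            # tradeoff-specific representation
        return self.dec(y / scale)         # demodulate before decoding

ae = ModulatedAE()
x = torch.randn(2, 3, 32, 32)
for lam in (0.1, 1.0):                     # two R-D operating points, one model
    out = ae(x, torch.full((2,), lam))
    print(lam, out.shape)
```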
 

 
Author David Geronimo; Joan Serrat; Antonio Lopez; Ramon Baldrich
  Title Traffic sign recognition for computer vision project-based learning Type Journal Article
Year 2013 Publication IEEE Transactions on Education Abbreviated Journal T-EDUC
  Volume 56 Issue 3 Pages 364-371  
  Keywords traffic signs  
Abstract This paper presents a graduate course project on computer vision. The aim of the project is to detect and recognize traffic signs in video sequences recorded by an on-board vehicle camera. This is a demanding problem, given that traffic sign recognition is one of the most challenging problems for driving assistance systems. Equally, it is motivating for the students given that it is a real-life problem. Furthermore, it gives them the opportunity to appreciate the difficulty of real-world vision problems and to assess the extent to which this problem can be solved by modern computer vision and pattern classification techniques taught in the classroom. The learning objectives of the course are introduced, as are the constraints imposed on its design, such as the diversity of the students' backgrounds and the amount of time they and their instructors dedicate to the course. The paper also describes the course contents, schedule, and how the project-based learning approach is applied. The outcomes of the course are discussed, including both the students' marks and their personal feedback.
ISSN 0018-9359
  Notes ADAS; CIC Approved no  
  Call Number Admin @ si @ GSL2013; ADAS @ adas @ Serial 2160  
 

 
Author Aura Hernandez-Sabate; Debora Gil; Jaume Garcia; Enric Marti
  Title Image-based Cardiac Phase Retrieval in Intravascular Ultrasound Sequences Type Journal Article
Year 2011 Publication IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control Abbreviated Journal T-UFFC
  Volume 58 Issue 1 Pages 60-72  
  Keywords 3-D exploring; ECG; band-pass filter; cardiac motion; cardiac phase retrieval; coronary arteries; electrocardiogram signal; image intensity local mean evolution; image-based cardiac phase retrieval; in vivo pullbacks acquisition; intravascular ultrasound sequences; longitudinal motion; signal extrema; time 36 ms; band-pass filters; biomedical ultrasonics; cardiovascular system; electrocardiography; image motion analysis; image retrieval; image sequences; medical image processing; ultrasonic imaging  
Abstract Longitudinal motion during in vivo pullback acquisition of intravascular ultrasound (IVUS) sequences is a major artifact for 3-D exploration of coronary arteries. Most current techniques obtain a gated pullback without longitudinal motion by relying on the electrocardiogram (ECG) signal, either through specific hardware or from the signal itself. We present an image-based approach for cardiac phase retrieval from coronary IVUS sequences that does not require an ECG signal. A signal reflecting cardiac motion is computed by exploring the evolution of the local mean of the image intensity. The signal is filtered by a band-pass filter centered at the main cardiac frequency, and the phase is retrieved by computing the signal extrema. The average frame processing time with our setup is 36 ms. Comparison to manually sampled sequences encourages a deeper study comparing them to ECG signals.
ISSN 0885-3010
Notes IAM; ADAS Approved no
  Call Number IAM @ iam @ HGG2011 Serial 1546  
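The signal-processing pipeline in the abstract maps naturally to a few SciPy calls. The sketch below uses a synthetic signal, and the frame rate and cardiac band are assumptions: a per-frame mean-intensity signal is band-pass filtered around the cardiac frequency, and its extrema are taken as cardiac phase events.

```python
# Sketch of image-based cardiac phase retrieval on a synthetic signal.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fps = 30.0                                  # IVUS frame rate (assumed)
t = np.arange(0, 10, 1 / fps)
# Synthetic stand-in for the image intensity local mean evolution:
signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)

# Band-pass around a plausible cardiac band (0.7-2.5 Hz, i.e. 42-150 bpm).
b, a = butter(2, [0.7, 2.5], btype="bandpass", fs=fps)
filtered = filtfilt(b, a, signal)

# Cardiac phase events as the extrema of the filtered signal.
peaks, _ = find_peaks(filtered)
troughs, _ = find_peaks(-filtered)
print(f"{peaks.size} peaks, {troughs.size} troughs in {t[-1]:.0f} s")
```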
 

 
Author Daniel Ponsa; Joan Serrat; Antonio Lopez
  Title On-board image-based vehicle detection and tracking Type Journal Article
Year 2011 Publication Transactions of the Institute of Measurement and Control Abbreviated Journal TIM
  Volume 33 Issue 7 Pages 783-805  
  Keywords vehicle detection  
Abstract In this paper we present a computer vision system for daytime vehicle detection and localization, an essential step in the development of several types of advanced driver assistance systems. It has a reduced processing time and high accuracy thanks to the combination of vehicle detection with lane-markings estimation and temporal tracking of both vehicles and lane markings. Concerning vehicle detection, our main contribution is a frame scanning process that inspects images according to the geometry of image formation, together with an Adaboost-based detector that is robust to the variability of the different vehicle types (car, van, truck) and lighting conditions. In addition, we propose a new method to estimate the most likely three-dimensional locations of vehicles on the road ahead. With regard to the lane-markings estimation component, we have two main contributions. First, instead of the commonly used edges, we employ a different image feature: ridges, which are better suited to this problem. Second, we adapt RANSAC, a generic robust estimation method, to fit a parametric model of a pair of lane markings to the image features. We qualitatively assess our vehicle detection system on sequences captured on several road types and under very different lighting conditions. The processed videos are available on a web page associated with this paper. A quantitative evaluation of the system has shown quite accurate results (a low number of false positives and negatives) at a reasonable computation time.
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ PSL2011 Serial 1413  
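The robust lane-marking fit can be illustrated with a bare-bones RANSAC on synthetic ridge points. The paper fits a parametric model of a pair of markings; this sketch fits a single line to stay short, and the data and thresholds are made up.

```python
# Bare-bones RANSAC line fit in the spirit of the lane-marking step.
import numpy as np

rng = np.random.default_rng(0)
xs = rng.uniform(0, 100, 80)
inliers = np.c_[xs, 0.5 * xs + 10 + rng.normal(0, 0.5, 80)]
outliers = rng.uniform(0, 100, (40, 2))
points = np.vstack([inliers, outliers])

def ransac_line(pts, iters=200, tol=1.0):
    """Return (slope, intercept) of the line with the largest inlier set."""
    best, best_count = None, 0
    for _ in range(iters):
        p, q = pts[rng.choice(len(pts), 2, replace=False)]
        if np.isclose(q[0], p[0]):
            continue                        # skip vertical sample pairs
        m = (q[1] - p[1]) / (q[0] - p[0])
        c = p[1] - m * p[0]
        resid = np.abs(pts[:, 1] - (m * pts[:, 0] + c)) / np.sqrt(1 + m * m)
        count = int((resid < tol).sum())
        if count > best_count:
            best, best_count = (m, c), count
    return best, best_count

model, n = ransac_line(points)
print(f"slope={model[0]:.2f}, intercept={model[1]:.2f}, inliers={n}")
```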
 

 
Author Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
  Title Video Alignment for Change Detection Type Journal Article
Year 2011 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
  Volume 20 Issue 7 Pages 1858-1869  
  Keywords video alignment  
Abstract In this work, we address the problem of aligning two video sequences. Such alignment refers to synchronization, i.e., the establishment of temporal correspondence between frames of the first and second video, followed by spatial registration of all the temporally corresponding frames. Video synchronization and alignment have been attempted before, but most often in the relatively simple cases of fixed or rigidly attached cameras and simultaneous acquisition. In addition, restrictive assumptions have been applied, including linear time correspondence or the knowledge of the complete trajectories of corresponding scene points; to some extent, these assumptions limit the practical applicability of any solutions developed. We intend to solve the more general problem of aligning video sequences recorded by independently moving cameras that follow similar trajectories, based only on the fusion of image intensity and GPS information. The novelty of our approach is to pose the synchronization as a MAP inference problem on a Bayesian network including the observations from these two sensor types, which have proven to be complementary. Alignment results are presented in the context of videos recorded from vehicles driving along the same track at different times, for different road types. In addition, we explore two applications of the proposed video alignment method, both based on change detection between aligned videos. One is the detection of vehicles, which could be of use in ADAS. The other is the online spotting of differences in videos of surveillance rounds.
  Notes ADAS; IF Approved no  
  Call Number DPS 2011; ADAS @ adas @ dps2011 Serial 1705  
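The paper's synchronization is MAP inference on a Bayesian network fusing appearance and GPS observations. As a much simpler stand-in, the sketch below aligns two synthetic sequences with a monotonic dynamic program (DTW-style) over a combined appearance-plus-GPS cost matrix; the descriptors, weight, and data are assumptions.

```python
# DTW-style monotonic alignment over a fused appearance + GPS cost matrix.
import numpy as np

rng = np.random.default_rng(1)
F1, F2 = 20, 25                             # frames in each sequence
app1, app2 = rng.random((F1, 8)), rng.random((F2, 8))   # appearance descriptors
gps1, gps2 = rng.random((F1, 2)), rng.random((F2, 2))   # GPS positions

# Combined cost: appearance distance plus weighted GPS distance (weight assumed).
cost = (np.linalg.norm(app1[:, None] - app2[None], axis=2)
        + 0.5 * np.linalg.norm(gps1[:, None] - gps2[None], axis=2))

D = np.full((F1 + 1, F2 + 1), np.inf)
D[0, 0] = 0.0
for i in range(1, F1 + 1):
    for j in range(1, F2 + 1):
        D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])

print(f"alignment cost: {D[F1, F2]:.2f}")
```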
 

 
Author Mohammad Rouhani; Angel Sappa
  Title Implicit Polynomial Representation through a Fast Fitting Error Estimation Type Journal Article
Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
  Volume 21 Issue 4 Pages 2089-2098  
Abstract This paper presents a simple distance estimation for implicit polynomial fitting. It is computed as the height of a simplex built between the point and the surface (i.e., a triangle in 2-D or a tetrahedron in 3-D), which is used as a coarse but reliable estimation of the orthogonal distance. The proposed distance can be described as a function of the coefficients of the implicit polynomial. Moreover, it is differentiable and has a smooth behavior. Hence, it can be used in any gradient-based optimization. In this paper, its use in a Levenberg-Marquardt framework is shown, which is particularly devoted to nonlinear least squares problems. The proposed estimation is a generalization of the gradient-based distance estimation, which is widely used in the literature. Experimental results, both in 2-D and 3-D data sets, are provided. Comparisons with state-of-the-art techniques are presented, showing the advantages of the proposed approach.
ISSN 1057-7149
  Notes ADAS Approved no  
  Call Number Admin @ si @ RoS2012b; ADAS @ adas @ Serial 1937  
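For intuition, the sketch below fits an implicit 2-D conic with SciPy's Levenberg-Marquardt solver, using the classical gradient-based distance estimate |f|/||∇f|| that the paper's simplex-height distance generalizes. The data are synthetic, and the conic parameterization is an assumption for the example.

```python
# Implicit conic fit via approximate orthogonal distances and LM optimization.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 100)
# Noisy samples of the ellipse x^2/4 + y^2 = 1.
pts = np.c_[2 * np.cos(theta), np.sin(theta)] + rng.normal(0, 0.02, (100, 2))

def residuals(c, xy):
    """Gradient-based distance estimate for f(x,y) = c0 x^2 + c1 xy + c2 y^2 + c3 x + c4 y + c5."""
    x, y = xy[:, 0], xy[:, 1]
    f = c[0]*x**2 + c[1]*x*y + c[2]*y**2 + c[3]*x + c[4]*y + c[5]
    fx = 2*c[0]*x + c[1]*y + c[3]
    fy = c[1]*x + 2*c[2]*y + c[4]
    return f / np.sqrt(fx**2 + fy**2 + 1e-12)

c0 = np.array([1.0, 0.0, 1.0, 0.0, 0.0, -1.0])        # start from a unit circle
fit = least_squares(residuals, c0, args=(pts,), method="lm")
print(np.round(fit.x / -fit.x[5], 2))                  # approx [0.25, 0, 1, 0, 0, -1]
```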