Author: Patricia Suarez; Angel Sappa; Boris X. Vintimilla
Title: Infrared Image Colorization based on a Triplet DCGAN Architecture
Type: Conference Article
Year: 2017
Publication: IEEE Conference on Computer Vision and Pattern Recognition Workshops
Abstract: This paper proposes a novel approach for colorizing near infrared (NIR) images using Deep Convolutional Generative Adversarial Network (DCGAN) architectures. The proposed approach is based on the use of a triplet model that learns each color channel independently, in a more homogeneous way. This allows fast convergence during training, obtaining a greater similarity between the given NIR image and the corresponding ground truth. The proposed approach has been evaluated on a large dataset of NIR images and compared with a recent approach, also based on a GAN architecture, in which all the color channels are obtained at the same time.
Address: Honolulu; Hawaii; USA; July 2017
Conference: CVPRW
Notes: ADAS; 600.086; 600.118
Call Number: Admin @ si @ SSV2017b    Serial: 2920
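The triplet idea above (one generator per color channel, trained independently) can be sketched in a few lines. The following PyTorch fragment is a minimal illustration under assumed layer sizes; it omits the adversarial training loop and is not the authors' exact architecture.

import torch
import torch.nn as nn

class ChannelGenerator(nn.Module):
    """Small convolutional generator producing a single color channel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),  # one color channel
        )

    def forward(self, nir):
        return self.net(nir)

class TripletGenerator(nn.Module):
    """Three independent generators, one per RGB channel."""
    def __init__(self):
        super().__init__()
        self.red, self.green, self.blue = (ChannelGenerator() for _ in range(3))

    def forward(self, nir):
        # Each channel is produced independently, then concatenated into RGB.
        return torch.cat([self.red(nir), self.green(nir), self.blue(nir)], dim=1)

rgb = TripletGenerator()(torch.randn(1, 1, 64, 64))  # -> (1, 3, 64, 64)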
 

 
Author: Patricia Suarez; Angel Sappa; Boris X. Vintimilla
Title: Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture
Type: Conference Article
Year: 2017
Publication: 19th International Conference on Image Analysis and Processing
Keywords: CNN in Multispectral Imaging; Image Colorization
Abstract: This paper focuses on near infrared (NIR) image colorization using a Conditional Deep Convolutional Generative Adversarial Network (CDCGAN) architecture. The proposed architecture is based on a conditional probabilistic generative model. First, it learns to colorize the given input image using a triplet model architecture that tackles each channel independently. In the proposed model, the final layer of the red channel considers the infrared image in order to enhance the details, resulting in a sharp RGB image. Then, in the second stage, a discriminative model is used to estimate the probability that the generated image came from the training dataset rather than having been generated automatically. Experimental results on a large set of real images are provided, showing the validity of the proposed approach. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.
Address: Catania; Italy; September 2017
Conference: ICIAP
Notes: ADAS; MSIAU; 600.086; 600.122; 600.118
Call Number: Admin @ si @ SSV2017c    Serial: 3016
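A hedged sketch of the red-channel detail the abstract singles out: the final layer of the red-channel generator also consumes the raw NIR input, modeled here as a channel concatenation before the last convolution. Depths and kernel sizes are assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class RedChannelGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Final layer consumes the learned features plus the raw NIR image.
        self.final = nn.Sequential(nn.Conv2d(64 + 1, 1, 3, padding=1), nn.Tanh())

    def forward(self, nir):
        f = self.features(nir)
        return self.final(torch.cat([f, nir], dim=1))

red = RedChannelGenerator()(torch.randn(1, 1, 64, 64))  # -> (1, 1, 64, 64)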
 

 
Author: Konstantia Georgouli; Katerine Diaz; Jesus Martinez del Rincon; Anastasios Koidis
Title: Building generic, easily-updatable chemometric models with harmonisation and augmentation features: The case of FTIR vegetable oils classification
Type: Conference Article
Year: 2017
Publication: 3rd International Conference Metrology Promoting Standardization and Harmonization in Food and Nutrition
Address: Thessaloniki; Greece; October 2017
Conference: IMEKOFOODS
Notes: ADAS; 600.118
Call Number: Admin @ si @ GDM2017    Serial: 3081
 

 
Author: Marc Masana; Idoia Ruiz; Joan Serrat; Joost Van de Weijer; Antonio Lopez
Title: Metric Learning for Novelty and Anomaly Detection
Type: Conference Article
Year: 2018
Publication: 29th British Machine Vision Conference
Abstract: When neural networks process images which do not resemble the distribution seen during training, so-called out-of-distribution images, they often make wrong predictions, and do so too confidently. The capability to detect out-of-distribution images is therefore crucial for many real-world applications. We divide out-of-distribution detection into novelty detection (images of classes which are not in the training set but are related to those) and anomaly detection (images of classes which are unrelated to the training set). By related we mean they contain the same type of objects, like digits in MNIST and SVHN. Most existing work has focused on anomaly detection, and has addressed this problem considering networks trained with the cross-entropy loss. In contrast, we propose to use metric learning, which does not have the drawback of the softmax layer (inherent to cross-entropy methods) that forces the network to divide its prediction power over the learned classes. We perform extensive experiments and evaluate both novelty and anomaly detection, even in a relevant application such as traffic sign recognition, obtaining comparable or better results than previous works.
Address: Newcastle; UK; September 2018
Conference: BMVC
Notes: LAMP; ADAS; 601.305; 600.124; 600.106; 602.200; 600.120; 600.118
Call Number: Admin @ si @ MRS2018    Serial: 3156
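A minimal sketch of the metric-learning alternative to softmax scoring that the abstract argues for: embed images, compute class centroids, and use the distance to the nearest centroid as the out-of-distribution score. The toy embedder and data below are assumptions, not the paper's networks or training loss.

import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128))  # toy embedder

def class_centroids(xs, ys, num_classes):
    """Mean embedding per known class."""
    z = embed(xs)
    return torch.stack([z[ys == c].mean(dim=0) for c in range(num_classes)])

def ood_score(x, centroids):
    # Distance to the nearest class centroid; large -> likely OOD.
    d = torch.cdist(embed(x), centroids)
    return d.min(dim=1).values

xs, ys = torch.randn(100, 1, 28, 28), torch.randint(0, 10, (100,))
centroids = class_centroids(xs, ys, num_classes=10)
print(ood_score(torch.randn(5, 1, 28, 28), centroids))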
 

 
Author: Diego Porres
Title: Discriminator Synthesis: On reusing the other half of Generative Adversarial Networks
Type: Conference Article
Year: 2021
Publication: Machine Learning for Creativity and Design, NeurIPS Workshop
Abstract: Generative Adversarial Networks have long since revolutionized the world of computer vision and, tied to it, the world of art. Arduous efforts have gone into fully utilizing and stabilizing training so that outputs of the Generator network have the highest possible fidelity, but little has gone into using the Discriminator after training is complete. In this work, we propose to use the latter and show a way to use the features it has learned from the training dataset to both alter an image and generate one from scratch. We name this method Discriminator Dreaming, and the full code can be found at this https URL.
Address: Virtual; December 2021
Conference: NEURIPSW
Notes: ADAS; 601.365
Call Number: Admin @ si @ Por2021    Serial: 3597
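The "Discriminator Dreaming" idea lends itself to a short sketch: gradient-ascend an input image so as to maximize activations inside a trained discriminator, DeepDream-style. The stand-in discriminator trunk below is an assumption; the paper reuses the discriminator of an actual trained GAN.

import torch
import torch.nn as nn

# Stand-in for the trunk of a trained discriminator (an assumption).
disc_features = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 3, padding=1), nn.LeakyReLU(0.2),
)

# Optimize the image itself, not the network weights.
img = torch.randn(1, 3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    loss = -disc_features(img).norm()  # ascend activation magnitude
    loss.backward()
    opt.step()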
 

 
Author: Yi Xiao; Felipe Codevilla; Christopher Pal; Antonio Lopez
Title: Action-Based Representation Learning for Autonomous Driving
Type: Conference Article
Year: 2020
Publication: Conference on Robot Learning
Abstract: Human drivers produce a vast amount of data which could, in principle, be used to improve autonomous driving systems. Unfortunately, seemingly straightforward approaches for creating end-to-end driving models that map sensor data directly into driving actions are problematic in terms of interpretability, and typically have significant difficulty dealing with spurious correlations. Alternatively, we propose to use this kind of action-based driving data for learning representations. Our experiments show that an affordance-based driving model pre-trained with this approach can leverage a relatively small amount of weakly annotated imagery and outperform pure end-to-end driving models, while being more interpretable. Further, we demonstrate how this strategy outperforms previous methods based on learning inverse dynamics models as well as other methods based on heavy human supervision (ImageNet).
Address: Virtual; November 2020
Conference: CORL
Notes: ADAS; 600.118
Call Number: Admin @ si @ XCP2020    Serial: 3487
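A hedged sketch of the two stages the abstract describes: pre-train an image encoder by predicting the driver's actions (supervised only by vehicle signals), then reuse the frozen encoder to fit an affordance head on a small amount of labeled imagery. Architectures, losses, and the affordance set are illustrative assumptions.

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
action_head = nn.Linear(16, 2)      # steering, acceleration
affordance_head = nn.Linear(16, 3)  # hypothetical affordances

# Stage 1: supervise on vehicle signals only (no human labeling of images).
imgs, actions = torch.randn(8, 3, 128, 128), torch.randn(8, 2)
loss = nn.functional.mse_loss(action_head(encoder(imgs)), actions)
loss.backward()

# Stage 2: freeze the encoder, fit affordances on weakly annotated imagery.
for p in encoder.parameters():
    p.requires_grad = False
affordances = affordance_head(encoder(imgs).detach())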
 

 
Author: Yi Xiao; Felipe Codevilla; Diego Porres; Antonio Lopez
Title: Scaling Vision-Based End-to-End Autonomous Driving with Multi-View Attention Learning
Type: Conference Article
Year: 2023
Publication: International Conference on Intelligent Robots and Systems
Abstract: In end-to-end driving, human driving demonstrations are used to train perception-based driving models by imitation learning. This process is supervised by vehicle signals (e.g., steering angle, acceleration) but does not require extra costly supervision (human labeling of sensor data). As a representative of such vision-based end-to-end driving models, CILRS is commonly used as a baseline against which new driving models are compared. Some recent models achieve better performance than CILRS by using expensive sensor suites and/or large amounts of human-labeled data for training. Given the difference in performance, one may think that it is not worth pursuing vision-based pure end-to-end driving. However, we argue that this approach still has great value and potential considering cost and maintenance. In this paper, we present CIL++, which improves on CILRS by both processing higher-resolution images, using a human-inspired HFOV (horizontal field of view) as an inductive bias, and incorporating a proper attention mechanism. CIL++ achieves competitive performance compared to models which are more costly to develop. We propose to replace CILRS with CIL++ as a strong vision-based pure end-to-end driving baseline supervised by only vehicle signals and trained by conditional imitation learning.
Address: Detroit; USA; October 2023
Conference: IROS
Notes: ADAS
Call Number: Admin @ si @ XCP2023    Serial: 3930
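A minimal sketch of multi-view attention in the spirit of this abstract: per-view convolutional features covering a wide HFOV are flattened into tokens and fused by self-attention before predicting vehicle signals. Dimensions are assumptions, and conditioning on navigation command and speed is omitted for brevity.

import torch
import torch.nn as nn

backbone = nn.Conv2d(3, 64, kernel_size=8, stride=8)  # per-view patch features
attn = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
action_head = nn.Linear(64, 2)  # steering, acceleration

views = torch.randn(3, 3, 128, 128)  # three camera views spanning a wide HFOV
tokens = [backbone(v.unsqueeze(0)).flatten(2).transpose(1, 2) for v in views]
tokens = torch.cat(tokens, dim=1)    # (1, 3 * 16 * 16, 64) token sequence
fused = attn(tokens)                 # self-attention across all views
action = action_head(fused.mean(dim=1))  # -> (1, 2)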
 

 
Author: Alexey Dosovitskiy; German Ros; Felipe Codevilla; Antonio Lopez; Vladlen Koltun
Title: CARLA: An Open Urban Driving Simulator
Type: Conference Article
Year: 2017
Publication: 1st Annual Conference on Robot Learning, Proceedings of Machine Learning Research
Volume: 78    Pages: 1-16
Keywords: Autonomous driving; sensorimotor control; simulation
Abstract: We introduce CARLA, an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions. We use CARLA to study the performance of three approaches to autonomous driving: a classic modular pipeline, an end-to-end model trained via imitation learning, and an end-to-end model trained via reinforcement learning. The approaches are evaluated in controlled scenarios of increasing difficulty, and their performance is examined via metrics provided by CARLA, illustrating the platform's utility for autonomous driving research.
Address: Mountain View; CA; USA; November 2017
Conference: CORL
Notes: ADAS; 600.085; 600.118
Call Number: Admin @ si @ DRC2017    Serial: 2988
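As a flavor of the "flexible specification of sensor suites" the abstract mentions, here is a small client script using the modern CARLA Python API (0.9+); note the 0.8-era API that accompanied this paper differed. Host, port, and attribute values are illustrative.

import carla

# Connect to a running CARLA server (default host/port are assumptions).
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn a vehicle at a predefined spawn point.
blueprints = world.get_blueprint_library()
vehicle_bp = blueprints.filter('vehicle.*')[0]
vehicle = world.spawn_actor(vehicle_bp, world.get_map().get_spawn_points()[0])

# Sensor-suite specification: attach an RGB camera and set its resolution.
camera_bp = blueprints.find('sensor.camera.rgb')
camera_bp.set_attribute('image_size_x', '800')
camera_bp.set_attribute('image_size_y', '600')
camera = world.spawn_actor(
    camera_bp,
    carla.Transform(carla.Location(x=1.5, z=2.4)),
    attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk('out/%06d.png' % image.frame))

vehicle.set_autopilot(True)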
 

 
Author: Alejandro Gonzalez Alzate; Sebastian Ramos; David Vazquez; Antonio Lopez; Jaume Amores
Title: Spatiotemporal Stacked Sequential Learning for Pedestrian Detection
Type: Conference Article
Year: 2015
Publication: Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015
Pages: 3-12
Keywords: SSL; Pedestrian Detection
Abstract: Pedestrian classifiers decide which image windows contain a pedestrian. In practice, such classifiers provide a relatively high response at neighboring windows overlapping a pedestrian, while the responses around potential false positives are expected to be lower. An analogous reasoning applies to image sequences: if there is a pedestrian located within a frame, the same pedestrian is expected to appear close to the same location in neighboring frames. Therefore, such a location has a chance of receiving high classification scores during several frames, while false positives are expected to be more spurious. In this paper we propose to exploit such correlations to improve the accuracy of base pedestrian classifiers. In particular, we propose two-stage classifiers which rely not only on the image descriptors required by the base classifiers but also on the response of such base classifiers in a given spatiotemporal neighborhood. More specifically, we train pedestrian classifiers using a stacked sequential learning (SSL) paradigm. We use a new pedestrian dataset, acquired from a car, to evaluate our proposal at different frame rates, and also test on the well-known Caltech dataset. The obtained results show that our SSL proposal boosts detection accuracy significantly with a minimal impact on the computational cost. Interestingly, SSL improves accuracy most in the most dangerous situations, i.e., when a pedestrian is close to the camera.
Address: Santiago de Compostela; Spain; June 2015
Area: ACDC    Conference: IbPRIA
Notes: ADAS; 600.057; 600.054; 600.076
Call Number: GRV2015; ADAS @ adas @ GRV2015    Serial: 2454
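The SSL construction is simple to sketch: for each candidate window, the second-stage classifier sees the base classifier's responses over a spatiotemporal neighborhood in addition to the window's own descriptor. The neighborhood radius and random data below are assumptions.

import numpy as np

def ssl_features(scores, t, x, radius=1):
    """Stack base-classifier scores from neighboring frames/locations."""
    neigh = [scores[t + dt][x + dx]
             for dt in range(-radius, radius + 1)
             for dx in range(-radius, radius + 1)]
    return np.array(neigh)

# scores[t][x]: base classifier response for window x in frame t.
scores = np.random.rand(5, 10)
feat = ssl_features(scores, t=2, x=4)  # 9 stacked responses
# A second-stage classifier (e.g. a linear SVM) is then trained on the
# concatenation [window descriptor, feat] to re-score each candidate.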
 

 
Author: Diego Cheda; Daniel Ponsa; Antonio Lopez
Title: Pedestrian Candidates Generation using Monocular Cues
Type: Conference Article
Year: 2012
Publication: IEEE Intelligent Vehicles Symposium
Pages: 7-12
Keywords: pedestrian detection
Abstract: Common techniques for pedestrian candidate generation (e.g., sliding-window approaches) are based on an exhaustive search over the image. This implies that the number of windows produced is huge, which translates into significant time consumption in the classification stage. In this paper, we propose a method that significantly reduces the number of windows to be considered by a classifier. Our method is a monocular one that exploits the geometric and depth information available in single images. Both representations of the world are fused together to generate pedestrian candidates based on an underlying model which focuses only on objects standing vertically on the ground plane and having a certain height, according to their depth in the scene. We evaluate our algorithm on a challenging dataset and demonstrate its application to pedestrian detection, where a considerable reduction in the number of candidate windows is achieved.
Publisher: IEEE Xplore
ISSN: 1931-0587    ISBN: 978-1-4673-2119-8
Conference: IV
Notes: ADAS
Call Number: Admin @ si @ CPL2012c; ADAS @ adas @ cpl2012d    Serial: 2013
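The candidate-generation geometry can be illustrated with a flat-ground pinhole model: assuming a known camera height and a roughly fixed pedestrian height, the image row of a candidate's feet determines its depth and hence its window scale, yielding far fewer windows than exhaustive search. All parameters below are illustrative assumptions; the paper additionally fuses depth estimated from monocular cues.

import numpy as np

f, cam_h, ped_h = 800.0, 1.2, 1.7  # focal (px), camera and pedestrian height (m)
horizon_v = 240                     # image row of the horizon (px)

def candidate_height(v_foot):
    """Pixel height of a pedestrian whose feet project to image row v_foot."""
    depth = f * cam_h / (v_foot - horizon_v)  # ground-plane depth (m)
    return f * ped_h / depth                  # projected height (px)

for v in np.arange(260, 480, 40):
    print(f"feet at row {v}: window height ~{candidate_height(v):.0f}px")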