|
Records |
Links |
|
Author |
Patricia Suarez; Angel Sappa; Boris X. Vintimilla |
|
|
Title |
Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture |
Type |
Conference Article |
|
Year |
2017 |
Publication |
19th International Conference on Image Analysis and Processing |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
CNN in Multispectral Imaging; Image Colorization |
|
|
Abstract |
This paper focuses on near infrared (NIR) image colorization using a Conditional Deep Convolutional Generative Adversarial Network (CDCGAN) architecture. The proposed architecture is based on a conditional probabilistic generative model. Firstly, it learns to colorize the given input image using a triplet model architecture that tackles every channel independently. In the proposed model, the final layer of the red channel considers the infrared image to enhance the details, resulting in a sharp RGB image. Then, in the second stage, a discriminative model is used to estimate the probability that the generated image came from the training dataset rather than from the generative model. Experimental results with a large set of real images are provided, showing the validity of the proposed approach. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results. |
|
|
Address |
Catania; Italy; September 2017 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICIAP |
|
|
Notes |
ADAS; MSIAU; 600.086; 600.122; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ SSV2017c |
Serial |
3016 |
|
Permanent link to this record |
|
|
|
|
Author |
Santi Puch; Irina Sanchez; Aura Hernandez-Sabate; Gemma Piella; Vesna Prckovska |
|
|
Title |
Global Planar Convolutions for Improved Context Aggregation in Brain Tumor Segmentation |
Type |
Conference Article |
|
Year |
2018 |
Publication |
International MICCAI Brainlesion Workshop |
Abbreviated Journal |
|
|
|
Volume |
11384 |
Issue |
|
Pages |
393-405 |
|
|
Keywords |
Brain tumors; 3D fully-convolutional CNN; Magnetic resonance imaging; Global planar convolution |
|
|
Abstract |
In this work, we introduce the Global Planar Convolution module as a building block for fully-convolutional networks that aggregates global information and, therefore, enhances the context perception capabilities of segmentation networks in the context of brain tumor segmentation. We implement two baseline architectures (3D UNet and a residual version of 3D UNet, ResUNet) and present a novel architecture based on these two, ContextNet, that includes the proposed Global Planar Convolution module. We show that the addition of such a module eliminates the need to build networks with several representation levels, which tend to be over-parametrized and to exhibit slow convergence. Furthermore, we provide a visual demonstration of the behavior of GPC modules via visualization of intermediate representations. We finally participate in the 2018 edition of the BraTS challenge with our best-performing models, which are based on ContextNet, and report the evaluation scores on the validation and test sets of the challenge. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MICCAIW |
|
|
Notes |
ADAS; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ PSH2018 |
Serial |
3251 |
|
Permanent link to this record |
|
|
|
|
Author |
Fernando Barrera; Felipe Lumbreras; Angel Sappa |
|
|
Title |
Evaluation of Similarity Functions in Multimodal Stereo |
Type |
Conference Article |
|
Year |
2012 |
Publication |
9th International Conference on Image Analysis and Recognition |
Abbreviated Journal |
|
|
|
Volume |
7324 |
Issue |
I |
Pages |
320-329 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents an evaluation framework for multimodal stereo matching that allows comparing the performance of four similarity functions. Additionally, it presents details of a multimodal stereo head that supplies thermal infrared and color images, as well as aspects of its calibration and rectification. The pipeline includes a novel method for disparity selection, which is suitable for evaluating the similarity functions. Finally, a benchmark for comparing different initializations of the proposed framework is presented. Similarity functions are based on mutual information, gradient orientation and scale space representations. Their evaluation is performed using two metrics: i) disparity error, and ii) number of correct matches on planar regions. In addition to the proposed evaluation, the current paper also shows that 3D sparse representations can be recovered from such a multimodal stereo head. |
|
|
Address |
Aveiro; Portugal |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-31294-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICIAR |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
BLS2012a |
Serial |
2014 |
|
Permanent link to this record |
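The similarity functions evaluated in the record above include mutual information, which can be estimated from the joint intensity histogram of two registered patches. A minimal NumPy sketch of that idea (an illustration, not the authors' implementation; the bin count is an assumed parameter):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information between two registered, equally sized patches,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)  # marginal p(x), column vector
    py = pxy.sum(axis=0, keepdims=True)  # marginal p(y), row vector
    nz = pxy > 0                         # skip empty bins to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

An identical pair attains the entropy of the patch, while a constant (uninformative) second patch yields zero, which is what makes the measure usable across modalities such as thermal infrared versus color.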
|
|
|
|
Author |
Daniel Hernandez; Alejandro Chacon; Antonio Espinosa; David Vazquez; Juan Carlos Moure; Antonio Lopez |
|
|
Title |
Embedded real-time stereo estimation via Semi-Global Matching on the GPU |
Type |
Conference Article |
|
Year |
2016 |
Publication |
16th International Conference on Computational Science |
Abbreviated Journal |
|
|
|
Volume |
80 |
Issue |
|
Pages |
143-153 |
|
|
Keywords |
Autonomous Driving; Stereo; CUDA; 3d reconstruction |
|
|
Abstract |
Dense, robust and real-time computation of depth information from stereo-camera systems is a computationally demanding requirement for robotics, advanced driver assistance systems (ADAS) and autonomous vehicles. Semi-Global Matching (SGM) is a widely used algorithm that propagates consistency constraints along several paths across the image. This work presents a real-time system producing reliable disparity estimation results on new embedded, energy-efficient GPU devices. Our design runs on a Tegra X1 at 41 frames per second for an image size of 640×480, 128 disparity levels, and 4 path directions for the SGM method. |
|
|
Address |
San Diego; CA; USA; June 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICCS |
|
|
Notes |
ADAS; 600.085; 600.082; 600.076 |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ HCE2016a |
Serial |
2740 |
|
Permanent link to this record |
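The Semi-Global Matching step referenced in the record above aggregates matching costs along image paths, penalizing small disparity changes with P1 and larger jumps with P2. A minimal NumPy sketch of a single left-to-right aggregation path (function name and penalty values are illustrative; the paper's contribution is an embedded CUDA implementation, not reproduced here):

```python
import numpy as np

def sgm_aggregate_lr(cost, P1=1.0, P2=4.0):
    """Aggregate a matching-cost volume of shape (H, W, D) along the
    left-to-right path, one of the SGM path directions."""
    H, W, D = cost.shape
    L = np.empty(cost.shape, dtype=float)
    L[:, 0, :] = cost[:, 0, :]
    for x in range(1, W):
        prev = L[:, x - 1, :]                       # costs at previous pixel
        prev_min = prev.min(axis=1, keepdims=True)
        same = prev                                 # keep the same disparity
        up = np.roll(prev, 1, axis=1) + P1          # come from d - 1
        up[:, 0] = np.inf
        down = np.roll(prev, -1, axis=1) + P1       # come from d + 1
        down[:, -1] = np.inf
        jump = prev_min + P2                        # large disparity jump
        best = np.minimum(np.minimum(same, up), np.minimum(down, jump))
        # subtracting prev_min keeps the values bounded (standard SGM trick)
        L[:, x, :] = cost[:, x, :] + best - prev_min
    return L
```

Summing such aggregations over 4 (or 8) directions and taking the per-pixel argmin yields the disparity map; the penalties are what smooth out isolated matching errors.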
|
|
|
|
Author |
Alexey Dosovitskiy; German Ros; Felipe Codevilla; Antonio Lopez; Vladlen Koltun |
|
|
Title |
CARLA: An Open Urban Driving Simulator |
Type |
Conference Article |
|
Year |
2017 |
Publication |
1st Annual Conference on Robot Learning. Proceedings of Machine Learning Research |
Abbreviated Journal |
|
|
|
Volume |
78 |
Issue |
|
Pages |
1-16 |
|
|
Keywords |
Autonomous driving; sensorimotor control; simulation |
|
|
Abstract |
We introduce CARLA, an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions. We use CARLA to study the performance of three approaches to autonomous driving: a classic modular pipeline, an end-to-end model trained via imitation learning, and an end-to-end model trained via reinforcement learning. The approaches are evaluated in controlled scenarios of increasing difficulty, and their performance is examined via metrics provided by CARLA, illustrating the platform's utility for autonomous driving research. |
|
|
Address |
Mountain View; CA; USA; November 2017 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CORL |
|
|
Notes |
ADAS; 600.085; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DRC2017 |
Serial |
2988 |
|
Permanent link to this record |
|
|
|
|
Author |
German Ros; Sebastian Ramos; Manuel Granados; Amir Bakhtiary; David Vazquez; Antonio Lopez |
|
|
Title |
Vision-based Offline-Online Perception Paradigm for Autonomous Driving |
Type |
Conference Article |
|
Year |
2015 |
Publication |
IEEE Winter Conference on Applications of Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
231 - 238 |
|
|
Keywords |
Autonomous Driving; Scene Understanding; SLAM; Semantic Segmentation |
|
|
Abstract |
Autonomous driving is a key factor for future mobility. Properly perceiving the environment of the vehicle is essential for safe driving, which requires computing accurate geometric and semantic information in real time. In this paper, we challenge state-of-the-art computer vision algorithms for building a perception system for autonomous driving. An inherent drawback in the computation of visual semantics is the trade-off between accuracy and computational cost. We propose to circumvent this problem by following an offline-online strategy. During the offline stage, dense 3D semantic maps are created. In the online stage, the current driving area is recognized in the maps via a re-localization process, which allows retrieving the pre-computed accurate semantics and 3D geometry in real time. Then, by detecting the dynamic obstacles, we obtain a rich understanding of the current scene. We evaluate our proposal quantitatively on the KITTI dataset and discuss the related open challenges for the computer vision community. |
|
|
Address |
Hawaii; January 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
ACDC |
Expedition |
|
Conference |
WACV |
|
|
Notes |
ADAS; 600.076 |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ RRG2015 |
Serial |
2499 |
|
Permanent link to this record |
|
|
|
|
Author |
Daniel Hernandez; Antonio Espinosa; David Vazquez; Antonio Lopez; Juan Carlos Moure |
|
|
Title |
GPU-accelerated real-time stixel computation |
Type |
Conference Article |
|
Year |
2017 |
Publication |
IEEE Winter Conference on Applications of Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1054-1062 |
|
|
Keywords |
Autonomous Driving; GPU; Stixel |
|
|
Abstract |
The Stixel World is a medium-level, compact representation of road scenes that abstracts millions of disparity pixels into hundreds or thousands of stixels. The goal of this work is to implement and evaluate a complete multi-stixel estimation pipeline on an embedded, energy-efficient, GPU-accelerated device. This work presents a full GPU-accelerated implementation of stixel estimation that produces reliable results at 26 frames per second (real-time) on the Tegra X1 for disparity images of 1024×440 pixels and stixel widths of 5 pixels, and achieves more than 400 frames per second on a high-end Titan X GPU card. |
|
|
Address |
Santa Rosa; CA; USA; March 2017 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
WACV |
|
|
Notes |
ADAS; 600.118 |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ HEV2017b |
Serial |
2812 |
|
Permanent link to this record |
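A stixel, as described in the record above, compresses a column of disparity pixels into a few vertical segments of near-constant disparity. A toy greedy per-column split conveying that idea (the paper itself uses a dynamic-programming formulation on the GPU; the function name and tolerance are assumptions):

```python
def column_stixels(disp_col, tol=1.0):
    """Greedily split one image column into vertical segments ('stixels')
    of near-constant disparity.  Returns (top_row, bottom_row, mean_disp)
    triples, with bottom_row exclusive."""
    stixels, start = [], 0
    for row in range(1, len(disp_col)):
        if abs(disp_col[row] - disp_col[start]) > tol:   # disparity jump
            seg = disp_col[start:row]
            stixels.append((start, row, sum(seg) / len(seg)))
            start = row
    seg = disp_col[start:]
    stixels.append((start, len(disp_col), sum(seg) / len(seg)))
    return stixels
```

Run over every column, this turns a dense disparity image of millions of pixels into a few segments per column, which is the compression that makes downstream processing cheap.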
|
|
|
|
Author |
Felipe Codevilla; Antonio Lopez; Vladlen Koltun; Alexey Dosovitskiy |
|
|
Title |
On Offline Evaluation of Vision-based Driving Models |
Type |
Conference Article |
|
Year |
2018 |
Publication |
15th European Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
11219 |
Issue |
|
Pages |
246-262 |
|
|
Keywords |
Autonomous driving; deep learning |
|
|
Abstract |
Autonomous driving models should ideally be evaluated by deploying them on a fleet of physical vehicles in the real world. Unfortunately, this approach is not practical for the vast majority of researchers. An attractive alternative is to evaluate models offline, on a pre-collected validation dataset with ground truth annotation. In this paper, we investigate the relation between various online and offline metrics for evaluation of autonomous driving models. We find that offline prediction error is not necessarily correlated with driving quality, and two models with identical prediction error can differ dramatically in their driving performance. We show that the correlation of offline evaluation with driving quality can be significantly improved by selecting an appropriate validation dataset and suitable offline metrics. |
|
|
Address |
Munich; September 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCV |
|
|
Notes |
ADAS; 600.124; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ CLK2018 |
Serial |
3162 |
|
Permanent link to this record |
|
|
|
|
Author |
Eugenio Alcala; Laura Sellart; Vicenc Puig; Joseba Quevedo; Jordi Saludes; David Vazquez; Antonio Lopez |
|
|
Title |
Comparison of two non-linear model-based control strategies for autonomous vehicles |
Type |
Conference Article |
|
Year |
2016 |
Publication |
24th Mediterranean Conference on Control and Automation |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
846-851 |
|
|
Keywords |
Autonomous Driving; Control |
|
|
Abstract |
This paper presents a comparison of two nonlinear model-based control strategies for autonomous cars. A control-oriented vehicle model based on a bicycle model is used. The two control strategies follow a model reference approach, from which the error dynamics model is developed. Both controllers receive as input the longitudinal, lateral and orientation errors, generating as control outputs the steering angle and the velocity of the vehicle. The first control approach is based on a non-linear control law designed by means of the direct Lyapunov approach. The second approach is based on sliding-mode control, which defines a set of sliding surfaces over which the error trajectories converge. The main advantage of the sliding-mode technique is its robustness against non-linearities and parametric uncertainties in the model. However, the main drawback of first-order sliding mode is chattering, so a high-order sliding-mode control has been implemented. To test and compare the proposed control strategies, different path following scenarios are used in simulation. |
|
|
Address |
Athens; Greece; June 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MED |
|
|
Notes |
ADAS; 600.085; 600.082; 600.076 |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ ASP2016 |
Serial |
2750 |
|
Permanent link to this record |
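Both controllers compared in the record above act on a kinematic bicycle model, x' = v cos θ, y' = v sin θ, θ' = (v/L) tan δ. A sketch of that model with a plain proportional steering law standing in for the paper's Lyapunov and sliding-mode laws (gains, wheelbase and steering saturation are illustrative assumptions):

```python
import math

def step_bicycle(state, v, delta, wheelbase=2.5, dt=0.05):
    """One Euler step of the kinematic bicycle model.
    state = (x, y, theta); v = speed; delta = steering angle."""
    x, y, theta = state
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + v * math.tan(delta) / wheelbase * dt)

def follow_line(y0, v=5.0, k_y=0.5, k_theta=1.5, steps=200):
    """Drive toward the line y = 0, steering proportionally on the
    lateral and heading errors (illustrative gains, not the paper's
    Lyapunov or sliding-mode laws), with steering clipped to ±0.3 rad."""
    state = (0.0, y0, 0.0)
    for _ in range(steps):
        _, y, theta = state
        delta = max(-0.3, min(0.3, -k_y * y - k_theta * theta))
        state = step_bicycle(state, v, delta)
    return state
```

Starting 1 m off the line, the lateral error decays toward zero within a few seconds of simulated time; the paper's controllers replace the proportional law with ones carrying stability and robustness guarantees.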
|
|
|
|
Author |
Javier Marin; David Vazquez; Antonio Lopez; Jaume Amores; Bastian Leibe |
|
|
Title |
Random Forests of Local Experts for Pedestrian Detection |
Type |
Conference Article |
|
Year |
2013 |
Publication |
14th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
2592 - 2599 |
|
|
Keywords |
ADAS; Random Forest; Pedestrian Detection |
|
|
Abstract |
Pedestrian detection is one of the most challenging tasks in computer vision, and has received a great deal of attention in recent years. Recently, some authors have shown the advantages of using combinations of part/patch-based detectors in order to cope with the large variability of poses and the existence of partial occlusions. In this paper, we propose a pedestrian detection method that efficiently combines multiple local experts by means of a Random Forest ensemble. The proposed method works with rich block-based representations such as HOG and LBP, in such a way that the same features are reused by the multiple local experts, so that no extra computational cost is needed with respect to a holistic method. Furthermore, we demonstrate how to integrate the proposed approach with a cascaded architecture in order to achieve not only high accuracy but also acceptable efficiency. In particular, the resulting detector operates at five frames per second on a laptop. We tested the proposed method on well-known challenging datasets such as Caltech, ETH, Daimler, and INRIA. The method proposed in this work consistently ranks among the top performers on all the datasets, being either the best method or within a small margin of the best one. |
|
|
Address |
Sydney; Australia; December 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
IEEE |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1550-5499 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICCV |
|
|
Notes |
ADAS; 600.057; 600.054 |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ MVL2013 |
Serial |
2333 |
|
Permanent link to this record |