Author Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
  Title Evaluating Color Representation for Online Road Detection Type Conference Article
  Year 2013 Publication ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars Abbreviated Journal  
  Volume Issue Pages 594-595  
  Keywords  
  Abstract Detecting traversable road areas ahead of a moving vehicle is a key process for modern autonomous driving systems. Most existing algorithms use color to classify pixels as road or background. These algorithms reduce the effect of lighting variations and weather conditions by exploiting the discriminant/invariant properties of different color representations. However, to date, no comparison between these representations has been conducted. Therefore, in this paper, we perform an evaluation of existing color representations for road detection. More specifically, we focus on color planes derived from RGB data and their most common combinations. The evaluation is done on a set of 7000 road images acquired using an on-board camera in different real-driving situations.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVVT:E2M  
  Notes ADAS;ISE Approved no  
  Call Number Admin @ si @ AGL2013 Serial 2794  
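A minimal sketch related to the record above: it derives a few of the common RGB-based color planes (luminance, normalized rgb, opponent channels) that such an evaluation compares as per-pixel features for a road/background classifier. The specific planes and weights here are illustrative assumptions, not the paper's exact selection.

```python
# Illustrative sketch (not the paper's code): common color planes derived from RGB,
# usable as per-pixel features for a road/background classifier.
import numpy as np

def color_planes(rgb: np.ndarray) -> dict:
    """Return per-pixel color planes derived from a float RGB image in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = r + g + b + 1e-6
    return {
        "gray": 0.299 * r + 0.587 * g + 0.114 * b,   # luminance
        "norm_r": r / s,                             # normalized rgb (chromaticity)
        "norm_g": g / s,
        "opp_1": (r - g) / np.sqrt(2.0),             # opponent color channels
        "opp_2": (r + g - 2.0 * b) / np.sqrt(6.0),
    }

if __name__ == "__main__":
    img = np.random.rand(240, 320, 3)                # stand-in for an on-board camera frame
    feats = color_planes(img)
    # Stack selected planes into a per-pixel feature vector for a classifier.
    features = np.stack([feats["norm_r"], feats["norm_g"], feats["opp_1"]], axis=-1)
    print(features.shape)  # (240, 320, 3)
```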
 

 
Author Cristhian A. Aguilera-Carrasco; F. Aguilera; Angel Sappa; C. Aguilera; Ricardo Toledo
  Title Learning cross-spectral similarity measures with deep convolutional neural networks Type Conference Article
  Year 2016 Publication 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops Abbreviated Journal
  Volume Issue Pages  
  Keywords  
  Abstract The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene, which cannot be obtained with just images from one spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains.
  Address Las Vegas; USA; June 2016
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes ADAS; 600.086; 600.076 Approved no  
  Call Number Admin @ si @ AAS2016 Serial 2809  
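A minimal sketch of one plausible architecture family for the record above: a "2-channel" CNN that stacks a visible patch and a near-infrared patch and outputs a similarity score. The layer sizes and the choice of the 2-channel variant are assumptions; the paper compares three CNN architectures.

```python
# Illustrative sketch (assumed layer sizes, not the paper's exact networks):
# a 2-channel CNN scoring the similarity of visible / near-infrared patch pairs.
import torch
import torch.nn as nn

class TwoChannelNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.score = nn.Linear(128, 1)  # higher value = more similar

    def forward(self, visible, nir):
        x = torch.cat([visible, nir], dim=1)  # (N, 2, H, W)
        return self.score(self.features(x).flatten(1))

if __name__ == "__main__":
    net = TwoChannelNet()
    vis = torch.randn(4, 1, 64, 64)   # grayscale visible patches
    nir = torch.randn(4, 1, 64, 64)   # co-registered NIR patches
    print(net(vis, nir).shape)        # torch.Size([4, 1])
```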
 

 
Author Daniel Hernandez; Lukas Schneider; Antonio Espinosa; David Vazquez; Antonio Lopez; Uwe Franke; Marc Pollefeys; Juan C. Moure
  Title Slanted Stixels: Representing San Francisco's Steepest Streets Type Conference Article
  Year 2017 Publication 28th British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract In this work we present a novel compact scene representation based on Stixels that infers geometric and semantic information. Our approach overcomes the previous rather restrictive geometric assumptions for Stixels by introducing a novel depth model to account for non-flat roads and slanted objects. Both semantic and depth cues are used jointly to infer the scene representation in a sound global energy minimization formulation. Furthermore, a novel approximation scheme is introduced that uses an extremely efficient over-segmentation. In doing so, the computational complexity of the Stixel inference algorithm is reduced significantly, achieving real-time computation capabilities with only a slight drop in accuracy. We evaluate the proposed approach in terms of semantic and geometric accuracy as well as run-time on four publicly available benchmark datasets. Our approach maintains accuracy on flat road scene datasets while improving substantially on a novel non-flat road dataset.  
  Address London; UK; September 2017
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes ADAS; 600.118 Approved no  
  Call Number ADAS @ adas @ HSE2017a Serial 2945  
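A minimal sketch of the depth-model idea behind the record above, under assumed notation: instead of a constant depth per Stixel segment, fit a slanted (linear-in-row) depth model to a column segment and report the residual as a data cost that a Stixel energy minimization could use. This is a simplification for illustration, not the paper's formulation.

```python
# Illustrative sketch: fit a slanted per-segment depth model d(v) = a*v + b
# over a column segment and return the least-squares data cost.
import numpy as np

def slanted_segment_cost(depth_column: np.ndarray, top: int, bottom: int):
    """Fit a linear depth model over rows [top, bottom) and return (a, b, cost)."""
    rows = np.arange(top, bottom, dtype=float)
    d = depth_column[top:bottom]
    A = np.stack([rows, np.ones_like(rows)], axis=1)
    coeffs, residuals, _, _ = np.linalg.lstsq(A, d, rcond=None)
    a, b = coeffs
    cost = float(residuals[0]) if residuals.size else float(np.sum((A @ coeffs - d) ** 2))
    return a, b, cost

if __name__ == "__main__":
    v = np.arange(100, dtype=float)
    column = 0.05 * v + 2.0 + 0.01 * np.random.randn(100)  # slanted ground plus noise
    a, b, cost = slanted_segment_cost(column, 20, 80)
    print(f"slope={a:.3f} offset={b:.3f} cost={cost:.4f}")
```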
 

 
Author Ishaan Gulrajani; Kundan Kumar; Faruk Ahmed; Adrien Ali Taiga; Francesco Visin; David Vazquez; Aaron Courville
  Title PixelVAE: A Latent Variable Model for Natural Images Type Conference Article
  Year 2017 Publication 5th International Conference on Learning Representations Abbreviated Journal  
  Volume Issue Pages  
  Keywords Deep Learning; Unsupervised Learning  
  Abstract Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and generate samples that preserve global structure but tend to suffer from image blurriness. PixelCNNs model sharp contours and details very well, but lack an explicit latent representation and have difficulty modeling large-scale structure in a computationally efficient way. In this paper, we present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. The resulting architecture achieves state-of-the-art log-likelihood on binarized MNIST. We extend PixelVAE to a hierarchy of multiple latent variables at different scales; this hierarchical model achieves competitive likelihood on 64x64 ImageNet and generates high-quality samples on LSUN bedrooms.  
  Address Toulon; France; April 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICLR  
  Notes ADAS; 600.085; 600.076; 601.281; 600.118 Approved no  
  Call Number ADAS @ adas @ GKA2017 Serial 2815  
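A heavily simplified sketch of the idea in the record above: a VAE whose decoder conditions a masked (autoregressive, PixelCNN-style) convolution on the latent code, so each output pixel depends on the latent and on previously generated pixels. Layer sizes and the single masked layer are assumptions; the real model is much deeper.

```python
# Illustrative sketch (assumed sizes; a stand-in, not the paper's PixelVAE).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """Type-A masked convolution: each output pixel sees only pixels above/left of it."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        kH, kW = self.kernel_size
        mask = torch.ones(1, 1, kH, kW)
        mask[:, :, kH // 2, kW // 2:] = 0   # current pixel and everything to its right
        mask[:, :, kH // 2 + 1:, :] = 0     # all rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias, padding=self.padding)

class TinyPixelVAE(nn.Module):
    def __init__(self, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
        self.z_to_map = nn.Linear(z_dim, 28 * 28)
        self.ar = MaskedConv2d(2, 1, kernel_size=5, padding=2)  # latent map + image

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        z_map = self.z_to_map(z).view(-1, 1, 28, 28)
        logits = self.ar(torch.cat([z_map, x], dim=1))            # autoregressive decoder
        return logits, mu, logvar

if __name__ == "__main__":
    model = TinyPixelVAE()
    x = torch.rand(8, 1, 28, 28)          # binarized-MNIST-like input
    logits, mu, logvar = model(x)
    print(logits.shape)                   # torch.Size([8, 1, 28, 28])
```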
 

 
Author Vassileios Balntas; Edgar Riba; Daniel Ponsa; Krystian Mikolajczyk
  Title Learning local feature descriptors with triplets and shallow convolutional neural networks Type Conference Article
  Year 2016 Publication 27th British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract It has recently been demonstrated that local feature descriptors based on convolutional neural networks (CNN) can significantly improve the matching performance. Previous work on learning such descriptors has focused on exploiting pairs of positive and negative patches to learn discriminative CNN representations. In this work, we propose to utilize triplets of training samples, together with in-triplet mining of hard negatives. We show that our method achieves state-of-the-art results, without the computational overhead typically associated with mining of negatives and with lower complexity of the network architecture. We compare our approach to recently introduced convolutional local feature descriptors, and demonstrate the advantages of the proposed methods in terms of performance and speed. We also examine different loss functions associated with triplets.
 
  Address York; UK; September 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes ADAS; 600.086 Approved no  
  Call Number Admin @ si @ BRP2016 Serial 2818  
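A minimal sketch of the training idea in the record above: a shallow descriptor CNN trained with a triplet margin loss where the negative distance is the smaller of the anchor-negative and positive-negative distances (in-triplet hard negative mining). The architecture and margin are assumptions, not the paper's exact settings.

```python
# Illustrative sketch (assumed shallow architecture): triplet loss with in-triplet
# hard negative mining for local patch descriptors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowDescriptor(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 7, stride=2), nn.Tanh(),
            nn.Conv2d(32, 64, 6), nn.Tanh(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)   # L2-normalized descriptors

def triplet_loss_with_swap(a, p, n, margin=1.0):
    d_ap = (a - p).pow(2).sum(1)
    d_an = (a - n).pow(2).sum(1)
    d_pn = (p - n).pow(2).sum(1)
    d_neg = torch.min(d_an, d_pn)                # in-triplet hard negative
    return F.relu(margin + d_ap - d_neg).mean()

if __name__ == "__main__":
    net = ShallowDescriptor()
    anchor, positive, negative = (torch.randn(16, 1, 32, 32) for _ in range(3))
    loss = triplet_loss_with_swap(net(anchor), net(positive), net(negative))
    loss.backward()
    print(float(loss))
```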
 

 
Author Simon Jégou; Michal Drozdzal; David Vazquez; Adriana Romero; Yoshua Bengio
  Title The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation Type Conference Article
  Year 2017 Publication IEEE Conference on Computer Vision and Pattern Recognition Workshops Abbreviated Journal  
  Volume Issue Pages  
  Keywords Semantic Segmentation  
  Abstract State-of-the-art approaches for semantic image segmentation are built on Convolutional Neural Networks (CNNs). The typical segmentation architecture is composed of (a) a downsampling path responsible for extracting coarse semantic features, followed by (b) an upsampling path trained to recover the input image resolution at the output of the model and, optionally, (c) a post-processing module (e.g. Conditional Random Fields) to refine the model predictions.

Recently, a new CNN architecture, Densely Connected Convolutional Networks (DenseNets), has shown excellent results on image classification tasks. The idea of DenseNets is based on the observation that if each layer is directly connected to every other layer in a feed-forward fashion then the network will be more accurate and easier to train.

In this paper, we extend DenseNets to deal with the problem of semantic segmentation. We achieve state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any further post-processing module or pretraining. Moreover, due to the smart construction of the model, our approach has far fewer parameters than the currently published best entries for these datasets.
 
  Address Honolulu; USA; July 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes MILAB; ADAS; 600.076; 600.085; 601.281 Approved no  
  Call Number ADAS @ adas @ JDV2016 Serial 2866  
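A minimal sketch of the building block behind the record above: a dense block in which every layer receives the concatenation of the block input and all previously produced feature maps. Growth rate and layer count are assumptions; the full FC-DenseNet additionally has down/up transition layers and skip connections.

```python
# Illustrative sketch (assumed growth rate and depth): a DenseNet-style dense block.
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    def __init__(self, in_ch, growth_rate):
        super().__init__()
        self.op = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, growth_rate, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.op(x)

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth_rate=12, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_ch + i * growth_rate, growth_rate) for i in range(n_layers)
        )

    def forward(self, x):
        new_feats = []
        for layer in self.layers:
            out = layer(torch.cat([x] + new_feats, dim=1))  # layer sees all earlier features
            new_feats.append(out)
        return torch.cat(new_feats, dim=1)  # only newly created features are passed on

if __name__ == "__main__":
    block = DenseBlock(in_ch=48, growth_rate=12, n_layers=4)
    x = torch.randn(2, 48, 64, 64)
    print(block(x).shape)  # torch.Size([2, 48, 64, 64])
```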
 

 
Author Youssef El Rhabi; Simon Loic; Brun Luc; Josep Llados; Felipe Lumbreras
  Title Information Theoretic Rotationwise Robust Binary Descriptor Learning Type Conference Article
  Year 2016 Publication Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR) Abbreviated Journal  
  Volume Issue Pages 368-378  
  Keywords  
  Abstract In this paper, we propose a new data-driven approach for binary descriptor selection. In order to draw a clear analysis of common designs, we present a general information-theoretic selection paradigm. It encompasses several standard binary descriptor construction schemes, including a recent state-of-the-art one named BOLD. We pursue the same endeavor to increase the stability of the produced descriptors with respect to rotations. To achieve this goal, we have designed a novel offline selection criterion which is better adapted to the online matching procedure. The effectiveness of our approach is demonstrated on two standard datasets, where our descriptor is compared to BOLD and to several classical descriptors. In particular, it emerges that our approach achieves performance equivalent to or better than BOLD while relying on descriptors that are half as long. Such an improvement can be influential for real-time applications.
  Address Mérida; Mexico; November 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference S+SSPR  
  Notes DAG; ADAS; 600.097; 600.086 Approved no  
  Call Number Admin @ si @ RLL2016 Serial 2871  
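A minimal sketch of the general selection family the record above analyses (not the paper's specific criterion): greedily pick binary intensity-comparison tests with high response entropy and low correlation with the tests already selected.

```python
# Illustrative sketch: greedy information-driven selection of binary descriptor tests.
import numpy as np

def select_tests(responses: np.ndarray, n_select: int, max_corr: float = 0.3):
    """responses: (n_patches, n_tests) boolean matrix of test outcomes on training patches."""
    r = responses.astype(float)
    p = r.mean(axis=0)
    entropy = -(p * np.log2(p + 1e-9) + (1 - p) * np.log2(1 - p + 1e-9))
    order = np.argsort(-entropy)                 # most informative tests first
    selected = []
    for t in order:
        if len(selected) == n_select:
            break
        if all(abs(np.corrcoef(r[:, t], r[:, s])[0, 1]) < max_corr for s in selected):
            selected.append(int(t))
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    responses = rng.random((500, 256)) > 0.5     # stand-in for test outcomes on patches
    print(select_tests(responses, n_select=32)[:8])
```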
 

 
Author Daniel Hernandez; Antonio Espinosa; David Vazquez; Antonio Lopez; Juan Carlos Moure
  Title Embedded Real-time Stixel Computation Type Conference Article
  Year 2017 Publication GPU Technology Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords GPU; CUDA; Stixels; Autonomous Driving  
  Abstract  
  Address Silicon Valley; USA; May 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference GTC  
  Notes ADAS; 600.118 Approved no  
  Call Number ADAS @ adas @ HEV2017a Serial 2879  
 

 
Author David Vazquez; Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Antonio Lopez; Adriana Romero; Michal Drozdzal; Aaron Courville
  Title A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images Type Conference Article
  Year 2017 Publication 31st International Congress and Exhibition on Computer Assisted Radiology and Surgery Abbreviated Journal  
  Volume Issue Pages  
  Keywords Deep Learning; Medical Imaging  
  Abstract Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are polyp miss-rate and inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy images, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. We provide new baselines on this dataset by training standard fully convolutional networks (FCN) for semantic segmentation and significantly outperforming, without any further post-processing, prior results in endoluminal scene segmentation.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CARS  
  Notes ADAS; MV; 600.075; 600.085; 600.076; 601.281; 600.118 Approved no  
  Call Number ADAS @ adas @ VBS2017a Serial 2880  
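A minimal sketch of the kind of FCN baseline the record above describes: a small fully convolutional network producing per-pixel class logits, trained with a cross-entropy loss. The class count, names, and layer sizes are assumptions made for illustration.

```python
# Illustrative sketch (assumed classes and sizes): a tiny FCN baseline for
# endoluminal scene segmentation.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classify = nn.Conv2d(64, n_classes, kernel_size=1)
        self.up = nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False)

    def forward(self, x):
        return self.up(self.classify(self.down(x)))  # per-pixel class logits

if __name__ == "__main__":
    model = TinyFCN()
    frames = torch.randn(2, 3, 224, 224)             # colonoscopy frames
    labels = torch.randint(0, 3, (2, 224, 224))      # ground-truth segmentation masks
    loss = nn.CrossEntropyLoss()(model(frames), labels)
    loss.backward()
    print(float(loss))
```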
 

 
Author Victor Vaquero; German Ros; Francesc Moreno-Noguer; Antonio Lopez; Alberto Sanfeliu
  Title Joint coarse-and-fine reasoning for deep optical flow Type Conference Article
  Year 2017 Publication 24th International Conference on Image Processing Abbreviated Journal  
  Volume Issue Pages 2558-2562  
  Keywords  
  Abstract We propose a novel representation for dense pixel-wise estimation tasks using CNNs that boosts accuracy and reduces training time, by explicitly exploiting joint coarse-and-fine reasoning. The coarse reasoning is performed over a discrete classification space to obtain a general rough solution, while the fine details of the solution are obtained over a continuous regression space. In our approach both components are jointly estimated, which proved to be beneficial for improving estimation accuracy. Additionally, we propose a new network architecture, which combines coarse and fine components by treating the fine estimation as a refinement built on top of the coarse solution, and therefore adding details to the general prediction. We apply our approach to the challenging problem of optical flow estimation and empirically validate it against state-of-the-art CNN-based solutions trained from scratch and tested on large optical flow datasets.  
  Address Beijing; China; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICIP  
  Notes ADAS; 600.118 Approved no  
  Call Number Admin @ si @ VRM2017 Serial 2898  
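A minimal sketch of the coarse-and-fine idea in the record above: a shared feature map feeds (a) a classifier over discrete flow bins that yields a rough estimate and (b) a regressor that adds a continuous residual on top of it. The bin layout and head sizes are assumptions, not the paper's configuration.

```python
# Illustrative sketch (assumed bins and sizes): joint coarse classification and
# fine regression for per-pixel optical flow.
import torch
import torch.nn as nn

class CoarseFineFlowHead(nn.Module):
    def __init__(self, in_ch=64, n_bins=21, max_flow=20.0):
        super().__init__()
        self.coarse = nn.Conv2d(in_ch, 2 * n_bins, kernel_size=1)  # logits per flow axis
        self.fine = nn.Conv2d(in_ch, 2, kernel_size=1)             # continuous residual
        self.register_buffer("centers", torch.linspace(-max_flow, max_flow, n_bins))
        self.n_bins = n_bins

    def forward(self, feats):
        n, _, h, w = feats.shape
        logits = self.coarse(feats).view(n, 2, self.n_bins, h, w)
        probs = logits.softmax(dim=2)
        coarse_flow = (probs * self.centers.view(1, 1, -1, 1, 1)).sum(dim=2)
        return coarse_flow + self.fine(feats)   # refined (u, v) per pixel

if __name__ == "__main__":
    head = CoarseFineFlowHead()
    feats = torch.randn(2, 64, 48, 64)
    print(head(feats).shape)  # torch.Size([2, 2, 48, 64])
```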