Author: Daniel Hernandez; Antonio Espinosa; David Vazquez; Antonio Lopez; Juan Carlos Moure
Title: GPU-accelerated real-time stixel computation
Type: Conference Article
Year: 2017
Publication: IEEE Winter Conference on Applications of Computer Vision
Pages: 1054-1062
Keywords: Autonomous Driving; GPU; Stixel
Abstract: The Stixel World is a medium-level, compact representation of road scenes that abstracts millions of disparity pixels into hundreds or thousands of stixels. The goal of this work is to implement and evaluate a complete multi-stixel estimation pipeline on an embedded, energy-efficient, GPU-accelerated device. This work presents a full GPU-accelerated implementation of stixel estimation that produces reliable results at 26 frames per second (real time) on the Tegra X1 for disparity images of 1024×440 pixels and stixel widths of 5 pixels, and achieves more than 400 frames per second on a high-end Titan X GPU card.
Address: Santa Rosa, CA, USA; March 2017
Conference: WACV
Notes: ADAS; 600.118
Call Number: ADAS @ adas @ HEV2017b; Serial 2812
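To make the stixel idea above concrete, here is a toy, CPU-only Python sketch that cuts one stixel-width column of a disparity image into vertical segments of roughly constant disparity. It is only a greedy stand-in for the paper's GPU dynamic-programming inference; the function name, the 5-pixel width, and the jump threshold are illustrative assumptions.

    import numpy as np

    def greedy_column_stixels(disparity, col, width=5, max_jump=1.0):
        # Average the disparity over one stixel-width strip of columns.
        strip = disparity[:, col:col + width].mean(axis=1)
        stixels, top = [], 0
        for row in range(1, len(strip)):
            # Start a new stixel at each large depth discontinuity.
            if abs(strip[row] - strip[row - 1]) > max_jump:
                stixels.append((top, row, strip[top:row].mean()))
                top = row
        stixels.append((top, len(strip), strip[top:].mean()))
        return stixels  # list of (top_row, bottom_row, mean_disparity)

    # Random stand-in for a 1024x440 disparity image (dimensions from the abstract).
    disparity = np.abs(np.random.randn(440, 1024)).astype(np.float32)
    print(greedy_column_stixels(disparity, col=0)[:3])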
 

 
Author: Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen
Title: Combining Holistic and Part-based Deep Representations for Computational Painting Categorization
Type: Conference Article
Year: 2016
Publication: 6th International Conference on Multimedia Retrieval
Abstract: Automatic analysis of visual art, such as paintings, is a challenging interdisciplinary research problem. Conventional approaches rely only on global scene characteristics, encoding holistic information for computational painting categorization. We argue that such approaches are sub-optimal and that discriminative common visual structures provide complementary information for painting classification. We present an approach that encodes both the global scene layout and discriminative latent common structures for computational painting categorization. The regions of interest are automatically extracted, without any manual part labeling, by training class-specific deformable part-based models. Both the holistic image and the regions of interest are then described using multi-scale dense convolutional features. These features are pooled separately using Fisher vector encoding and afterwards concatenated into a single image representation. Experiments are performed on a challenging dataset with 91 different painters and 13 diverse painting styles. Our approach outperforms the standard method, which employs only the global scene characteristics. Furthermore, our method achieves state-of-the-art results, outperforming a recent multi-scale deep-features-based approach [11] by 6.4% and 3.8% on artist and style classification, respectively.
Address: New York, USA; June 2016
Conference: ICMR
Notes: LAMP; 600.068; 600.079; ADAS
Call Number: Admin @ si @ RKW2016; Serial 2763
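The record above pools holistic and part-based convolutional features separately and concatenates them. The Python sketch below shows only that wiring; plain average pooling stands in for the paper's Fisher vector encoding, and all array shapes are made-up placeholders.

    import numpy as np

    def pool_descriptors(descs):
        # Placeholder for Fisher-vector encoding: plain average pooling
        # over a (n_patches, dim) array of dense convolutional descriptors.
        return descs.mean(axis=0)

    def image_representation(holistic_descs, region_descs_list):
        # Encode the whole painting and each detected region separately,
        # then concatenate into a single image representation.
        parts = [pool_descriptors(holistic_descs)]
        parts += [pool_descriptors(d) for d in region_descs_list]
        return np.concatenate(parts)

    holistic = np.random.rand(500, 128)                    # whole-painting descriptors
    regions = [np.random.rand(80, 128) for _ in range(2)]  # two DPM-detected parts
    print(image_representation(holistic, regions).shape)   # (384,) = 3 * 128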
 

 
Author: Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title: Evaluating Color Representation for Online Road Detection
Type: Conference Article
Year: 2013
Publication: ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars
Pages: 594-595
Abstract: Detecting traversable road areas ahead of a moving vehicle is a key process for modern autonomous driving systems. Most existing algorithms use color to classify pixels as road or background. These algorithms reduce the effect of lighting variations and weather conditions by exploiting the discriminant/invariant properties of different color representations. However, to date, no comparison between these representations has been conducted. Therefore, in this paper, we perform an evaluation of existing color representations for road detection. More specifically, we focus on color planes derived from RGB data and their most common combinations. The evaluation is done on a set of 7000 road images acquired using an on-board camera in different real driving situations.
Conference: CVVT:E2M
Notes: ADAS; ISE
Call Number: Admin @ si @ AGL2013; Serial 2794
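As a rough illustration of the kind of color planes the evaluation above compares, the numpy sketch below derives a few representations commonly built from RGB data. It does not reproduce the paper's exact set of planes and combinations; the formulas are standard normalized-rgb and opponent-color definitions.

    import numpy as np

    def color_planes(rgb):
        # rgb: float image in [0, 1], shape (H, W, 3).
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        s = r + g + b + 1e-6                    # avoid division by zero
        return {
            "R": r, "G": g, "B": b,             # raw planes
            "r": r / s, "g": g / s,             # normalized rgb (less lighting-sensitive)
            "O1": (r - g) / np.sqrt(2),         # opponent color planes
            "O2": (r + g - 2 * b) / np.sqrt(6),
        }

    img = np.random.rand(4, 4, 3)
    for name, plane in color_planes(img).items():
        print(name, plane.shape)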
 

 
Author: Cristhian A. Aguilera-Carrasco; F. Aguilera; Angel Sappa; C. Aguilera; Ricardo Toledo
Title: Learning cross-spectral similarity measures with deep convolutional neural networks
Type: Conference Article
Year: 2016
Publication: 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops
Abstract: The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene that cannot be obtained with images from one spectral band alone. In this work we tackle the cross-spectral image similarity problem using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result on two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains.
Address: Las Vegas, USA; June 2016
Conference: CVPRW
Notes: ADAS; 600.086; 600.076
Call Number: Admin @ si @ AAS2016; Serial 2809
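The abstract above compares CNN architectures for cross-spectral patch similarity. Below is a minimal PyTorch sketch of one such design, a 2-channel network that stacks a visible and a near-infrared patch as input channels and regresses a similarity score; the layer sizes and patch size are assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class TwoChannelNet(nn.Module):
        # Toy 2-channel patch-similarity network: one visible and one
        # near-infrared patch are stacked as two input channels and
        # mapped to a single similarity score.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.score = nn.Linear(64, 1)

        def forward(self, vis, nir):
            x = torch.cat([vis, nir], dim=1)  # stack spectra as channels
            return self.score(self.features(x).flatten(1))

    net = TwoChannelNet()
    vis = torch.randn(8, 1, 64, 64)   # visible-spectrum patches
    nir = torch.randn(8, 1, 64, 64)   # near-infrared patches
    print(net(vis, nir).shape)        # torch.Size([8, 1])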
 

 
Author: Daniel Hernandez; Lukas Schneider; Antonio Espinosa; David Vazquez; Antonio Lopez; Uwe Franke; Marc Pollefeys; Juan C. Moure
Title: Slanted Stixels: Representing San Francisco's Steepest Streets
Type: Conference Article
Year: 2017
Publication: 28th British Machine Vision Conference
Abstract: In this work we present a novel compact scene representation based on Stixels that infers geometric and semantic information. Our approach overcomes the previous, rather restrictive geometric assumptions for Stixels by introducing a novel depth model to account for non-flat roads and slanted objects. Both semantic and depth cues are used jointly to infer the scene representation in a sound global energy minimization formulation. Furthermore, a novel approximation scheme is introduced that uses an extremely efficient over-segmentation. In doing so, the computational complexity of the Stixel inference algorithm is reduced significantly, achieving real-time computation capabilities with only a slight drop in accuracy. We evaluate the proposed approach in terms of semantic and geometric accuracy as well as run-time on four publicly available benchmark datasets. Our approach maintains accuracy on flat-road scene datasets while improving substantially on a novel non-flat-road dataset.
Address: London, UK; September 2017
Conference: BMVC
Notes: ADAS; 600.118
Call Number: ADAS @ adas @ HSE2017a; Serial 2945
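The key change the record above describes is a depth model that allows slanted surfaces. A minimal numpy sketch of a corresponding data term: instead of assuming constant disparity inside a stixel, fit disparity linearly in the row index and score the candidate segment by its residual. The function name and the plain least-squares formulation are illustrative assumptions, not the paper's energy terms.

    import numpy as np

    def slanted_segment_cost(disp_column, top, bottom):
        # Fit disparity linearly in the row index (slanted depth model,
        # rather than constant disparity) and return the residual sum
        # of squares as the data cost of this candidate stixel.
        rows = np.arange(top, bottom)
        d = disp_column[top:bottom]
        A = np.stack([rows, np.ones_like(rows)], axis=1).astype(float)
        coef, *_ = np.linalg.lstsq(A, d, rcond=None)  # slope + offset
        residual = d - A @ coef
        return float(residual @ residual)

    # One slanted surface: disparity decreases smoothly with the row index,
    # so a single linear segment fits it with low cost.
    col = np.linspace(40.0, 10.0, 200) + 0.1 * np.random.randn(200)
    print(slanted_segment_cost(col, 0, 200))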
 

 
Author: Ishaan Gulrajani; Kundan Kumar; Faruk Ahmed; Adrien Ali Taiga; Francesco Visin; David Vazquez; Aaron Courville
Title: PixelVAE: A Latent Variable Model for Natural Images
Type: Conference Article
Year: 2017
Publication: 5th International Conference on Learning Representations
Keywords: Deep Learning; Unsupervised Learning
Abstract: Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and generate samples that preserve global structure, but tend to suffer from image blurriness. PixelCNNs model sharp contours and details very well, but lack an explicit latent representation and have difficulty modeling large-scale structure in a computationally efficient way. In this paper, we present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. The resulting architecture achieves state-of-the-art log-likelihood on binarized MNIST. We extend PixelVAE to a hierarchy of multiple latent variables at different scales; this hierarchical model achieves competitive likelihood on 64×64 ImageNet and generates high-quality samples on LSUN bedrooms.
Address: Toulon, France; April 2017
Conference: ICLR
Notes: ADAS; 600.085; 600.076; 601.281; 600.118
Call Number: ADAS @ adas @ GKA2017; Serial 2815
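For context on the model above: PixelVAE keeps the standard VAE training objective (reconstruction plus KL) while swapping the plain decoder for a PixelCNN. The hedged PyTorch sketch below shows only that objective, with a generic Bernoulli decoder output as a placeholder; the shapes and the 32-dimensional latent are assumptions.

    import torch

    def vae_loss(x, x_logits, mu, logvar):
        # Negative ELBO for a Bernoulli decoder: reconstruction term plus
        # the analytic KL between q(z|x) = N(mu, sigma^2) and the N(0, I)
        # prior. PixelVAE makes the decoder autoregressive (PixelCNN);
        # the loss decomposition stays the same.
        rec = torch.nn.functional.binary_cross_entropy_with_logits(
            x_logits, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kl

    x = torch.rand(4, 784).round()    # binarized inputs (e.g. MNIST-sized)
    x_logits = torch.randn(4, 784)    # decoder output (placeholder)
    mu, logvar = torch.randn(4, 32), torch.randn(4, 32)
    print(vae_loss(x, x_logits, mu, logvar).item())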
 

 
Author: Vassileios Balntas; Edgar Riba; Daniel Ponsa; Krystian Mikolajczyk
Title: Learning local feature descriptors with triplets and shallow convolutional neural networks
Type: Conference Article
Year: 2016
Publication: 27th British Machine Vision Conference
Abstract: It has recently been demonstrated that local feature descriptors based on convolutional neural networks (CNNs) can significantly improve matching performance. Previous work on learning such descriptors has focused on exploiting pairs of positive and negative patches to learn discriminative CNN representations. In this work, we propose to utilize triplets of training samples, together with in-triplet mining of hard negatives. We show that our method achieves state-of-the-art results, without the computational overhead typically associated with mining of negatives and with lower complexity of the network architecture. We compare our approach to recently introduced convolutional local feature descriptors and demonstrate the advantages of the proposed methods in terms of performance and speed. We also examine different loss functions associated with triplets.
Address: York, UK; September 2016
Conference: BMVC
Notes: ADAS; 600.086
Call Number: Admin @ si @ BRP2016; Serial 2818
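The "in-triplet mining of hard negatives" mentioned above can be written down compactly: use whichever of the two available negative distances is smaller, so the loss is always driven by the harder case. The PyTorch sketch below is a minimal rendering of that idea; the margin, batch size, and descriptor dimension are assumptions.

    import torch

    def triplet_loss_in_triplet_mining(a, p, n, margin=1.0):
        # Triplet margin loss where the negative distance is taken from
        # whichever of (anchor, negative) or (positive, negative) is
        # harder, i.e. in-triplet mining of the hard negative.
        d_ap = (a - p).pow(2).sum(dim=1)
        d_an = (a - n).pow(2).sum(dim=1)
        d_pn = (p - n).pow(2).sum(dim=1)
        d_neg = torch.min(d_an, d_pn)     # harder of the two negatives
        return torch.clamp(margin + d_ap - d_neg, min=0).mean()

    a, p, n = (torch.randn(16, 128) for _ in range(3))  # descriptor batches
    print(triplet_loss_in_triplet_mining(a, p, n).item())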
 

 
Author: Cesar de Souza; Adrien Gaidon; Eleonora Vig; Antonio Lopez
Title: Sympathy for the Details: Dense Trajectories and Hybrid Classification Architectures for Action Recognition
Type: Conference Article
Year: 2016
Publication: 14th European Conference on Computer Vision
Pages: 697-716
Abstract: Action recognition in videos is a challenging task due to the complexity of the spatio-temporal patterns to model and the difficulty of acquiring and learning on large quantities of video data. Deep learning, although a breakthrough for image classification and showing promise for videos, has still not clearly superseded action recognition methods using hand-crafted features, even when training on massive datasets. In this paper, we introduce hybrid video classification architectures based on carefully designed unsupervised representations of hand-crafted spatio-temporal features classified by supervised deep networks. As we show in our experiments on five popular benchmarks for action recognition, our hybrid model combines the best of both worlds: it is data-efficient (trained on 150 to 10000 short clips) and yet improves significantly on the state of the art, including recent deep models trained on millions of manually labelled images and videos.
Address: Amsterdam, The Netherlands; October 2016
Abbreviated Series Title: LNCS
Conference: ECCV
Notes: ADAS; 600.076; 600.085
Call Number: Admin @ si @ SGV2016; Serial 2824
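The hybrid architecture described above pairs an unsupervised encoding of hand-crafted features with a supervised deep classifier. The Python sketch below shows only that two-stage wiring: average pooling stands in for the paper's carefully designed unsupervised encoding, and the dimensions (426-d dense-trajectory-style descriptors, 101 classes) are hypothetical placeholders.

    import torch
    import torch.nn as nn

    # Stage 1 (unsupervised, placeholder): pool per-trajectory descriptors
    # into one video-level vector. Average pooling stands in for the
    # paper's Fisher-vector-style encoding.
    video_descriptors = torch.randn(300, 426)     # e.g. 300 trajectories per clip
    video_vector = video_descriptors.mean(dim=0)

    # Stage 2 (supervised): a small deep network classifies the encoding.
    classifier = nn.Sequential(
        nn.Linear(426, 512), nn.ReLU(),
        nn.Dropout(0.5),
        nn.Linear(512, 101),                      # e.g. 101 action classes
    )
    print(classifier(video_vector.unsqueeze(0)).shape)  # torch.Size([1, 101])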
 

 
Author: Aura Hernandez-Sabate; Lluis Albarracin; Daniel Calvo; Nuria Gorgorio
Title: EyeMath: Identifying Mathematics Problem Solving Processes in a RTS Video Game
Type: Conference Article
Year: 2016
Publication: 5th International Conference on Games and Learning Alliance
Volume: 10056
Pages: 50-59
Abbreviated Series Title: LNCS
Conference: GALA
Notes: ADAS; IAM
Call Number: HAC2016; Serial 2864
 

 
Author: Saad Minhas; Aura Hernandez-Sabate; Shoaib Ehsan; Katerine Diaz; Ales Leonardis; Antonio Lopez; Klaus McDonald-Maier
Title: LEE: A photorealistic Virtual Environment for Assessing Driver-Vehicle Interactions in Self-Driving Mode
Type: Conference Article
Year: 2016
Publication: 14th European Conference on Computer Vision Workshops
Volume: 9915
Pages: 894-900
Keywords: Simulation environment; Automated Driving; Driver-Vehicle interaction
Abstract: Photorealistic virtual environments are crucial for developing and testing automated driving systems in a safe way during trials. As commercially available simulators are expensive and bulky, this paper presents a low-cost, extendable, and easy-to-use (LEE) virtual environment with the aim of highlighting its utility for level 3 driving automation. In particular, an experiment is performed using the presented simulator to explore the influence of different variables regarding control transfer of the car after the system was driving autonomously in a highway scenario. The results show that the speed of the car at the time when the system needs to transfer control to the human driver is critical.
Address: Amsterdam, The Netherlands; October 2016
Abbreviated Series Title: LNCS
Conference: ECCVW
Notes: ADAS; IAM; 600.085; 600.076
Call Number: MHE2016; Serial 2865