Author Felipe Codevilla; Matthias Muller; Antonio Lopez; Vladlen Koltun; Alexey Dosovitskiy
  Title End-to-end Driving via Conditional Imitation Learning Type Conference Article
  Year 2018 Publication IEEE International Conference on Robotics and Automation Abbreviated Journal
  Volume Issue Pages 4693 - 4700  
  Keywords  
  Abstract Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands. The supplementary video can be viewed at this https URL  
  Address Brisbane; Australia; May 2018  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICRA  
  Notes ADAS; 600.116; 600.124; 600.118 Approved no  
  Call Number Admin @ si @ CML2018 Serial 3108  
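A minimal sketch of the command-conditional branching this abstract describes, written as a hypothetical PyTorch model rather than the authors' code: a shared image backbone feeds one control head per high-level command, and the command selects which head produces the output. All layer sizes and names here are illustrative assumptions.

import torch
import torch.nn as nn

class ConditionalPolicy(nn.Module):
    def __init__(self, num_commands=4, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(      # stand-in for the image CNN
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # one control head (steer, throttle, brake) per high-level command
        self.branches = nn.ModuleList(
            nn.Linear(feat_dim, 3) for _ in range(num_commands)
        )

    def forward(self, image, command):
        feats = self.backbone(image)
        out = torch.stack([b(feats) for b in self.branches], dim=1)
        # select the branch indexed by the navigational command
        idx = command.view(-1, 1, 1).expand(-1, 1, out.size(-1))
        return out.gather(1, idx).squeeze(1)

At test time a route planner (or a human) supplies command, so the same trained policy can be told to turn left, turn right, or keep lane at an intersection.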
 

 
Author Jiaolong Xu; Peng Wang; Heng Yang; Antonio Lopez
  Title Training a Binary Weight Object Detector by Knowledge Transfer for Autonomous Driving Type Conference Article
  Year 2019 Publication IEEE International Conference on Robotics and Automation Abbreviated Journal
  Volume Issue Pages 2379-2384  
  Keywords  
  Abstract Autonomous driving has harsh requirements of small model size and energy efficiency in order to enable embedded systems to achieve real-time on-board object detection. Recent deep convolutional neural network based object detectors have achieved state-of-the-art accuracy. However, such models are trained with numerous parameters, and their high computational costs and large storage prohibit deployment on systems with limited memory and computation resources. Low-precision neural networks are popular techniques for reducing the computation requirements and memory footprint. Among them, the binary weight neural network (BWN) is the extreme case, which quantizes the floating-point weights into just one bit. BWNs are difficult to train and suffer from accuracy degradation due to the extremely low-bit representation. To address this problem, we propose a knowledge transfer (KT) method to aid the training of BWNs using a full-precision teacher network. We build DarkNet- and MobileNet-based binary weight YOLO-v2 detectors and conduct experiments on the KITTI benchmark for car, pedestrian and cyclist detection. The experimental results show that the proposed method maintains high detection accuracy while reducing the model size of DarkNet-YOLO from 257 MB to 8.8 MB and MobileNet-YOLO from 193 MB to 7.9 MB.
  Address Montreal; Canada; May 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICRA  
  Notes ADAS; 600.124; 600.116; 600.118 Approved no  
  Call Number Admin @ si @ XWY2018 Serial 3182  
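A hypothetical sketch of the two ingredients named in the abstract above, binary weights and knowledge transfer, assuming XNOR-Net-style binarization and an L2 feature-matching transfer term; the paper's actual detector losses and training schedule are not reproduced.

import torch.nn.functional as F

def binarize(weights):
    # BWN-style binarization: keep one floating-point scale per output
    # filter (the mean absolute value) and reduce the weights to their sign.
    alpha = weights.abs().mean(dim=(1, 2, 3), keepdim=True)
    return alpha * weights.sign()

def kt_loss(student_feat, teacher_feat, detection_loss, beta=0.5):
    # Knowledge transfer: the binary-weight student is also penalized for
    # deviating from the full-precision teacher's intermediate feature maps.
    transfer = F.mse_loss(student_feat, teacher_feat.detach())
    return detection_loss + beta * transfer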
 

 
Author Arnau Ramisa; Adriana Tapus; Ramon Lopez de Mantaras; Ricardo Toledo
  Title Mobile Robot Localization using Panoramic Vision and Combination of Feature Region Detectors Type Conference Article
  Year 2008 Publication IEEE International Conference on Robotics and Automation Abbreviated Journal
  Volume Issue Pages 538–543  
  Keywords  
  Abstract  
  Address Pasadena; CA; USA  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICRA  
  Notes RV;ADAS Approved no  
  Call Number Admin @ si @ RTL2008 Serial 1144  
 

 
Author Patricia Suarez; Angel Sappa; Boris X. Vintimilla
  Title Cross-Spectral Image Patch Similarity using Convolutional Neural Network Type Conference Article
  Year 2017 Publication IEEE International Workshop of Electronics, Control, Measurement, Signals and their application to Mechatronics Abbreviated Journal
  Volume Issue Pages  
  Keywords  
  Abstract The ability to compare image regions (patches) has been the basis of many approaches to core computer vision problems, including object, texture and scene categorization. Hence, developing representations for image patches has been of interest in several works. The current work focuses on learning similarity between cross-spectral image patches with a 2-channel convolutional neural network (CNN) model. The proposed approach is an adaptation of a previous work, aiming to obtain results similar to the state of the art but with low-cost hardware. The obtained results are compared with both classical approaches, showing improvements, and a state-of-the-art CNN-based approach.
  Address San Sebastian; Spain; May 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ECMSM  
  Notes ADAS; 600.086; 600.118 Approved no  
  Call Number Admin @ si @ SSV2017a Serial 2916  
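A minimal sketch of a 2-channel patch-similarity CNN of the kind the abstract above describes (the general design follows Zagoruyko and Komodakis); the layer sizes below are assumptions, not the paper's exact low-cost architecture.

import torch
import torch.nn as nn

class TwoChannelNet(nn.Module):
    """Stacks a visible and an infrared patch as two input channels and
    regresses a single similarity score for the pair."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.score = nn.Linear(64, 1)

    def forward(self, visible_patch, infrared_patch):
        pair = torch.cat([visible_patch, infrared_patch], dim=1)
        return self.score(self.features(pair))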
 

 
Author Angel Valencia; Roger Idrovo; Angel Sappa; Douglas Plaza; Daniel Ochoa
  Title A 3D Vision Based Approach for Optimal Grasp of Vacuum Grippers Type Conference Article
  Year 2017 Publication IEEE International Workshop of Electronics, Control, Measurement, Signals and their application to Mechatronics Abbreviated Journal
  Volume Issue Pages  
  Keywords  
  Abstract In general, robot grasping approaches are based on the usage of multi-finger grippers. However, when large objects need to be manipulated, vacuum grippers are preferred over finger-based grippers. This paper aims to estimate the best picking place for a vacuum gripper with two suction cups when planar objects of unknown size and geometry are considered. The approach estimates geometric properties of the object's shape from a partial point cloud (a single 3D view) and combines them with a theoretical model to generate an optimal contact point that minimizes the vacuum force needed to guarantee the grasp. Experimental results in real scenarios are presented to show the validity of the proposed approach.
  Address San Sebastian; Spain; May 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ECMSM  
  Notes ADAS; 600.086; 600.118 Approved no  
  Call Number Admin @ si @ VIS2017 Serial 2917  
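A hypothetical sketch of the geometric part of the pipeline above: estimate the centroid and principal axis of the planar object from the partial point cloud and place the two suction cups symmetrically about the centroid. The paper's vacuum-force model is not reproduced; cup_separation is an assumed gripper parameter.

import numpy as np

def suction_cup_placement(points, cup_separation):
    # points: (N, 3) partial point cloud of a roughly planar object
    centroid = points.mean(axis=0)
    # principal axis of the shape from a PCA of the centered points
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    axis = vt[0]                        # unit direction of largest extent
    offset = 0.5 * cup_separation * axis
    return centroid - offset, centroid + offset  # the two contact points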
 

 
Author German Ros; Sebastian Ramos; Manuel Granados; Amir Bakhtiary; David Vazquez; Antonio Lopez
  Title Vision-based Offline-Online Perception Paradigm for Autonomous Driving Type Conference Article
  Year 2015 Publication IEEE Winter Conference on Applications of Computer Vision Abbreviated Journal
  Volume Issue Pages 231 - 238  
  Keywords Autonomous Driving; Scene Understanding; SLAM; Semantic Segmentation  
  Abstract Autonomous driving is a key factor for future mobility. Properly perceiving the environment of the vehicle is essential for safe driving, which requires computing accurate geometric and semantic information in real time. In this paper, we challenge state-of-the-art computer vision algorithms to build a perception system for autonomous driving. An inherent drawback in the computation of visual semantics is the trade-off between accuracy and computational cost. We propose to circumvent this problem by following an offline-online strategy. During the offline stage, dense 3D semantic maps are created. In the online stage, the current driving area is recognized in the maps via a re-localization process, which allows retrieving the pre-computed accurate semantics and 3D geometry in real time. Then, by detecting the dynamic obstacles, we obtain a rich understanding of the current scene. We evaluate our proposal quantitatively on the KITTI dataset and discuss the related open challenges for the computer vision community.
  Address Hawaii; January 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area ACDC Expedition Conference WACV  
  Notes ADAS; 600.076 Approved no  
  Call Number ADAS @ adas @ RRG2015 Serial 2499  
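A schematic, hypothetical sketch of the offline-online split described above: semantics are precomputed per mapped pose offline, and online work reduces to re-localization (nearest mapped pose here, as a crude stand-in for the paper's re-localization process) plus dynamic-obstacle detection.

import numpy as np

class OfflineOnlinePerception:
    def __init__(self, map_poses, map_semantics):
        self.map_poses = np.asarray(map_poses)  # (M, D) poses mapped offline
        self.map_semantics = map_semantics      # precomputed semantics per pose

    def online_step(self, pose, detect_dynamic_obstacles):
        # Re-localize against the offline map instead of recomputing semantics.
        i = np.linalg.norm(self.map_poses - pose, axis=1).argmin()
        static = self.map_semantics[i]          # retrieved, not recomputed
        dynamic = detect_dynamic_obstacles()    # only this runs per frame
        return static, dynamic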
 

 
Author Daniel Hernandez; Antonio Espinosa; David Vazquez; Antonio Lopez; Juan Carlos Moure
  Title GPU-accelerated real-time stixel computation Type Conference Article
  Year 2017 Publication IEEE Winter Conference on Applications of Computer Vision Abbreviated Journal
  Volume Issue Pages 1054-1062  
  Keywords Autonomous Driving; GPU; Stixel  
  Abstract The Stixel World is a medium-level, compact representation of road scenes that abstracts millions of disparity pixels into hundreds or thousands of stixels. The goal of this work is to implement and evaluate a complete multi-stixel estimation pipeline on an embedded, energy-efficient, GPU-accelerated device. This work presents a full GPU-accelerated implementation of stixel estimation that produces reliable results at 26 frames per second (real-time) on the Tegra X1 for disparity images of 1024×440 pixels and stixel widths of 5 pixels, and achieves more than 400 frames per second on a high-end Titan X GPU card.
  Address Santa Rosa; CA; USA; March 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference WACV  
  Notes ADAS; 600.118 Approved no  
  Call Number ADAS @ adas @ HEV2017b Serial 2812  
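A much-simplified, hypothetical sketch of what a stixel decomposition computes: each 5-pixel-wide column of the disparity image is compressed into vertical runs of near-constant disparity. The paper's dynamic-programming formulation and its CUDA kernels are not reproduced here.

import numpy as np

def naive_stixels(disparity, stixel_width=5, jump=1.0):
    h, w = disparity.shape
    usable = w - w % stixel_width
    # average disparities across each stixel-width group of image columns
    cols = disparity[:, :usable].reshape(h, -1, stixel_width).mean(axis=2)
    stixels = []
    for c in range(cols.shape[1]):
        col, top = cols[:, c], 0
        for v in range(1, h):
            if abs(col[v] - col[v - 1]) > jump:  # disparity jump => cut here
                stixels.append((c, top, v, col[top:v].mean()))
                top = v
        stixels.append((c, top, h, col[top:].mean()))
    return stixels  # (column, v_top, v_bottom, mean_disparity) tuples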
 

 
Author Idoia Ruiz; Lorenzo Porzi; Samuel Rota Bulo; Peter Kontschieder; Joan Serrat
  Title Weakly Supervised Multi-Object Tracking and Segmentation Type Conference Article
  Year 2021 Publication IEEE Winter Conference on Applications of Computer Vision Workshops Abbreviated Journal
  Volume Issue Pages 125-133  
  Keywords  
  Abstract We introduce the problem of weakly supervised Multi-Object Tracking and Segmentation, i.e. joint weakly supervised instance segmentation and multi-object tracking, in which no mask annotation of any kind is provided. To address it, we design a novel synergistic training strategy that takes advantage of multi-task learning, i.e. the classification and tracking tasks guide the training of the unsupervised instance segmentation. For that purpose, we extract weak foreground localization information, provided by Grad-CAM heatmaps, to generate a partial ground truth to learn from. Additionally, RGB image-level information is employed to refine the mask prediction at the edges of the objects. We evaluate our method on KITTI MOTS, the most representative benchmark for this task, reducing the performance gap on the MOTSP metric between the fully supervised and weakly supervised approaches to just 12% and 12.7% for cars and pedestrians, respectively.
  Address Virtual; January 2021  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference WACVW  
  Notes ADAS; 600.118; 600.124 Approved no  
  Call Number Admin @ si @ RPR2021 Serial 3548  
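A hypothetical sketch of the weak-supervision step mentioned above: a Grad-CAM heatmap is thresholded into a partial pseudo ground truth with confident foreground, confident background, and ignored pixels. The thresholds are illustrative assumptions, and the paper's RGB edge refinement is omitted.

import numpy as np

def pseudo_mask_from_gradcam(heatmap, fg_thresh=0.6, bg_thresh=0.2):
    # heatmap: (H, W) Grad-CAM activations for one detected instance.
    rng = heatmap.max() - heatmap.min()
    h = (heatmap - heatmap.min()) / (rng + 1e-8)  # normalize to [0, 1]
    mask = np.full(h.shape, -1, dtype=np.int8)    # -1 = ignore in the loss
    mask[h >= fg_thresh] = 1                      # confident foreground
    mask[h <= bg_thresh] = 0                      # confident background
    return mask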
 

 
Author German Ros; Angel Sappa; Daniel Ponsa; Antonio Lopez
  Title Visual SLAM for Driverless Cars: A Brief Survey Type Conference Article
  Year 2012 Publication IEEE Workshop on Navigation, Perception, Accurate Positioning and Mapping for Intelligent Vehicles Abbreviated Journal
  Volume Issue Pages  
  Keywords SLAM  
  Abstract  
  Address Alcalá de Henares  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IVW  
  Notes ADAS Approved no  
  Call Number Admin @ si @ RSP2012; ADAS @ adas Serial 2019  
 

 
Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Michael Felsberg; J.Laaksonen
  Title Deep semantic pyramids for human attributes and action recognition Type Conference Article
  Year 2015 Publication Image Analysis, Proceedings of the 19th Scandinavian Conference, SCIA 2015 Abbreviated Journal
  Volume 9127 Issue Pages 341-353  
  Keywords Action recognition; Human attributes; Semantic pyramids  
  Abstract Describing persons and their actions is a challenging problem due to variations in pose, scale and viewpoint in real-world images. Recently, the semantic pyramids approach [1] for pose normalization has been shown to provide excellent results for gender and action recognition. The performance of the semantic pyramids approach relies on robust image description and is therefore limited by the use of shallow local features. In the context of object recognition [2] and object detection [3], convolutional neural networks (CNNs) or deep features have been shown to improve performance over conventional shallow features. We propose deep semantic pyramids for human attributes and action recognition. The method works by constructing spatial pyramids based on CNNs of different part locations. These pyramids are then combined to obtain a single semantic representation. We validate our approach on the Berkeley and 27 Human Attributes datasets for attribute classification. For action recognition, we perform experiments on two challenging datasets: Willow and PASCAL VOC 2010. The proposed deep semantic pyramids provide significant gains of 17.2%, 13.9%, 24.3% and 22.6% compared to the standard shallow semantic pyramids on the Berkeley, 27 Human Attributes, Willow and PASCAL VOC 2010 datasets, respectively. Our results also show that deep semantic pyramids outperform conventional CNNs based on the full bounding box of the person. Finally, we compare our approach with state-of-the-art methods and show a gain in performance compared to the best methods in the literature.
  Address Denmark; Copenhagen; June 2015  
  Corporate Author Thesis  
  Publisher Springer International Publishing Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-319-19664-0 Medium  
  Area Expedition Conference SCIA  
  Notes LAMP; 600.068; 600.079;ADAS Approved no  
  Call Number Admin @ si @ KRW2015b Serial 2672  
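A minimal, hypothetical sketch of the representation described above: deep features are extracted from the full-person box and from several part boxes, then concatenated into one descriptor. Here cnn stands in for any pretrained feature extractor, and the part-localization step of semantic pyramids is assumed to have already produced the boxes.

import torch
import torch.nn.functional as F

def deep_semantic_pyramid(cnn, image, part_boxes, size=224):
    # image: (1, 3, H, W); part_boxes: [(x0, y0, x1, y1), ...] including the
    # full bounding box of the person as well as part boxes (head, torso, ...).
    descriptors = []
    for x0, y0, x1, y1 in part_boxes:
        crop = image[:, :, y0:y1, x0:x1]
        crop = F.interpolate(crop, size=(size, size), mode="bilinear")
        descriptors.append(cnn(crop))     # (1, D) feature per region
    return torch.cat(descriptors, dim=1)  # single concatenated descriptor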