Author: Pau Riba; Alicia Fornes; Josep Llados
Title: Towards the Alignment of Handwritten Music Scores
Type: Conference Article
Year: 2015
Publication: 11th IAPR International Workshop on Graphics Recognition
Abstract: It is very common to find different versions of the same music work in the archives of opera theaters. These differences correspond to modifications and annotations made by the musicians. From the musicologist's point of view, these variations are very interesting and deserve study. This paper explores the alignment of music scores as a tool for automatically detecting the passages that contain such differences. Given the difficulty of recognizing handwritten music scores, our goal is to align the scores while avoiding the recognition of music elements as much as possible. After removing the staff lines, braces and ties, the bar lines are detected. Then each bar unit is described as a whole using the Blurred Shape Model. The bar units are aligned using Dynamic Time Warping (see the sketch following this record), and the analysis of the alignment path is used to detect the variations between the scores. The method has been evaluated on a subset of the CVC-MUSCIMA dataset, showing encouraging results.
Address: Nancy; France; August 2015
Publisher: Springer International Publishing
Editor: Bart Lamiroy; Rafael Dueire Lins
Abbreviated Series Title: LNCS
ISBN: 978-3-319-52158-9
Conference: GREC
Notes: DAG
Approved: no
Call Number: Admin @ si @
Serial: 2874
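
As a concrete illustration of the matching step described in the abstract, below is a minimal Python sketch of Dynamic Time Warping over sequences of bar-unit descriptors. It assumes each bar has already been described as a fixed-length vector (e.g., a Blurred Shape Model histogram); this is the textbook DTW recurrence the paper builds on, not the authors' implementation.

```python
import numpy as np

def dtw_align(bars_a, bars_b):
    """Align two sequences of bar-unit descriptors with Dynamic Time Warping.

    bars_a, bars_b: numpy arrays of shape (n, d) and (m, d), one row per bar
    unit. Returns the total alignment cost and the warping path as (i, j)
    index pairs; path regions with high local cost are candidate variations.
    """
    n, m = len(bars_a), len(bars_b)
    # Pairwise Euclidean distances between bar descriptors.
    cost = np.linalg.norm(bars_a[:, None, :] - bars_b[None, :, :], axis=2)

    # Accumulated-cost matrix filled with the standard DTW recurrence.
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # a bar of score A has no counterpart
                acc[i, j - 1],      # a bar of score B has no counterpart
                acc[i - 1, j - 1],  # the two bars match
            )

    # Backtrack from (n, m) to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return acc[n, m], path[::-1]
```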
 

 
Author: Marçal Rusiñol; Dimosthenis Karatzas; Josep Llados
Title: Automatic Verification of Properly Signed Multi-page Document Images
Type: Conference Article
Year: 2015
Publication: Proceedings of the Eleventh International Symposium on Visual Computing
Volume: 9475
Pages: 327-336
Keywords: Document Image; Manual Inspection; Signature Verification; Rejection Criterion; Document Flow
Abstract: In this paper we present an industrial application for the automatic screening of incoming multi-page documents in a banking workflow, aimed at determining whether these documents are properly signed or not. The proposed method is divided into three main steps (sketched after this record). First, individual pages are classified in order to identify the pages that should contain a signature. In a second step, we segment within those key pages the locations where the signatures should appear. The last step checks whether the signatures are actually present. Our method is tested in a real large-scale environment, and we report results for two different types of real multi-page contracts, totalling more than 14,500 pages.
Address: Las Vegas, Nevada, USA; December 2015
Abbreviated Series Title: LNCS
Series Volume: 9475
Conference: ISVC
Notes: DAG; 600.077
Approved: no
Call Number: Admin @ si @
Serial: 3189
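
The three-step flow of the abstract fits in a few lines of control logic. The following Python sketch uses hypothetical interfaces (page_classifier, signature_segmenter, signature_checker) as stand-ins for the trained models the paper describes; only the step ordering follows the abstract.

```python
def is_properly_signed(pages, page_classifier, signature_segmenter,
                       signature_checker):
    """Return True if every expected signature in a multi-page document is present.

    page_classifier(page) -> bool: should this page contain a signature?
    signature_segmenter(page) -> iterable of regions where signatures belong.
    signature_checker(region) -> bool: is a signature actually present?
    """
    # Step 1: keep only the pages that should carry a signature.
    key_pages = [p for p in pages if page_classifier(p)]
    if not key_pages:
        return False  # no expected-signature page found: flag for inspection
    for page in key_pages:
        # Step 2: locate where the signatures should appear on each key page.
        for region in signature_segmenter(page):
            # Step 3: verify a signature is actually there.
            if not signature_checker(region):
                return False
    return True
```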
 

 
Author: Sergio Silva; Victor Campmany; Laura Sellart; Juan Carlos Moure; Antoni Espinosa; David Vazquez; Antonio Lopez
Title: Autonomous GPU-based Driving
Type: Abstract
Year: 2015
Publication: Programming and Tuning Massively Parallel Systems
Abbreviated Journal: PUMPS
Keywords: Autonomous Driving; ADAS; CUDA
Abstract: Human factors cause most driving accidents; this is why it is now common to hear about autonomous driving as an alternative. Autonomous driving will not only increase safety, but will also enable a system of cooperative self-driving cars that reduces pollution and congestion. Furthermore, it will provide more freedom to people with disabilities, the elderly and children.

Autonomous driving requires perceiving and understanding the vehicle's environment (e.g., road, traffic signs, pedestrians, vehicles) using sensors (e.g., cameras, lidars, sonars and radars), self-localization (requiring GPS, inertial sensors and visual localization in precise maps), controlling the vehicle, and planning routes. These algorithms require high computational capability, and thanks to NVIDIA GPU acceleration this is starting to become feasible.

NVIDIA® is developing a new platform for boosting autonomous driving capabilities that is able to manage the vehicle via CAN bus: the Drive™ PX. It has 8 ARM cores with dual accelerated Tegra® X1 chips, 12 synchronized camera inputs for 360º vehicle perception, 4G and Wi-Fi capabilities for vehicle communications, and GPS and inertial sensor inputs for self-localization.

Our research group has been selected to test the Drive™ PX. Accordingly, we are developing a Drive™ PX based autonomous car. Currently, we are porting our previous CPU-based algorithms (e.g., Lane Departure Warning, Collision Warning, Automatic Cruise Control, Pedestrian Protection, or Semantic Segmentation) to run on the GPU.
 
Address: Barcelona; Spain
Conference: PUMPS
Notes: ADAS; 600.076; 600.082; 600.085
Approved: no
Call Number: ADAS @ adas @ SCS2015
Serial: 2645
 

 
Author: German Ros; Sebastian Ramos; Manuel Granados; Amir Bakhtiary; David Vazquez; Antonio Lopez
Title: Vision-based Offline-Online Perception Paradigm for Autonomous Driving
Type: Conference Article
Year: 2015
Publication: IEEE Winter Conference on Applications of Computer Vision
Pages: 231-238
Keywords: Autonomous Driving; Scene Understanding; SLAM; Semantic Segmentation
Abstract: Autonomous driving is a key factor for future mobility. Properly perceiving the environment of the vehicle is essential for safe driving, which requires computing accurate geometric and semantic information in real time. In this paper, we challenge state-of-the-art computer vision algorithms to build a perception system for autonomous driving. An inherent drawback in the computation of visual semantics is the trade-off between accuracy and computational cost. We propose to circumvent this problem by following an offline-online strategy (sketched after this record). During the offline stage, dense 3D semantic maps are created. In the online stage, the current driving area is recognized in the maps via a re-localization process, which allows the pre-computed accurate semantics and 3D geometry to be retrieved in real time. Then, by detecting the dynamic obstacles, we obtain a rich understanding of the current scene. We evaluate our proposal quantitatively on the KITTI dataset and discuss the related open challenges for the computer vision community.
Address: Hawaii; January 2015
Area: ACDC
Conference: WACV
Notes: ADAS; 600.076
Approved: no
Call Number: ADAS @ adas @ RRG2015
Serial: 2499
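
As a control-flow summary of the offline-online strategy, here is a short Python sketch. Every name in it is a hypothetical interface (relocalize, semantics_at, geometry_at, detect_dynamic_obstacles); the point is only to show which work is precomputed offline and which remains in the real-time loop.

```python
def online_perception(frame, offline_map, relocalize, detect_dynamic_obstacles):
    """One online iteration of the offline-online perception loop."""
    # Re-localization: recognize the current driving area in the offline map.
    pose = relocalize(frame, offline_map)
    # Accurate semantics and 3D geometry come precomputed from the offline stage.
    semantics = offline_map.semantics_at(pose)
    geometry = offline_map.geometry_at(pose)
    # Only the dynamic content must be computed online.
    obstacles = detect_dynamic_obstacles(frame, geometry)
    return semantics, geometry, obstacles
```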
 

 
Author: Alejandro Gonzalez Alzate; Gabriel Villalonga; Jiaolong Xu; David Vazquez; Jaume Amores; Antonio Lopez
Title: Multiview Random Forest of Local Experts Combining RGB and LIDAR data for Pedestrian Detection
Type: Conference Article
Year: 2015
Publication: IEEE Intelligent Vehicles Symposium (IV 2015)
Pages: 356-361
Keywords: Pedestrian Detection
Abstract: Despite significant recent advances, pedestrian detection remains an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities and a strong multi-view classifier that accounts for different pedestrian views and poses. In this paper we provide an extensive evaluation that gives insight into how each of these aspects (multi-cue, multimodality and a strong multi-view classifier) affects performance, both individually and when integrated together. In the multimodality component we explore the fusion of RGB and depth maps obtained by high-definition LIDAR, a modality that is only recently starting to receive attention (see the fusion sketch following this record). As our analysis reveals, although all the aforementioned aspects significantly help to improve performance, the fusion of visible-spectrum and depth information boosts accuracy by a much larger margin. The resulting detector not only ranks among the top performers on the challenging KITTI benchmark, but is also built upon very simple blocks that are easy to implement and computationally efficient. These simple blocks can easily be replaced with more sophisticated ones recently proposed, such as convolutional neural networks for feature representation, to further improve accuracy.
Address: Seoul; Korea; June 2015
Area: ACDC
Conference: IV
Notes: ADAS; 600.076; 600.057; 600.054
Approved: no
Call Number: ADAS @ adas @ GVX2015
Serial: 2625
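
To make the multimodality component concrete, the following toy Python sketch shows the fusion idea: per-window descriptors from the RGB image and from the LIDAR-derived depth map are concatenated and classified together. The feature extractors are hypothetical placeholders, and scikit-learn's RandomForestClassifier stands in for the paper's random forest of local experts.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fused_descriptor(rgb_window, depth_window, rgb_features, depth_features):
    """Concatenate per-window descriptors from both modalities.

    rgb_features / depth_features are hypothetical extractor functions
    (e.g., HOG- or LBP-style descriptors) returning 1-D feature vectors.
    """
    return np.concatenate([rgb_features(rgb_window),
                           depth_features(depth_window)])

# With fused descriptors X of shape (n_windows, n_features) and labels y
# (1 = pedestrian, 0 = background), a forest is trained on the joint space:
#   forest = RandomForestClassifier(n_estimators=100).fit(X, y)
#   scores = forest.predict_proba(X_test)[:, 1]
```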
 

 
Author: Alejandro Gonzalez Alzate; Gabriel Villalonga; German Ros; David Vazquez; Antonio Lopez
Title: 3D-Guided Multiscale Sliding Window for Pedestrian Detection
Type: Conference Article
Year: 2015
Publication: Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015
Volume: 9117
Pages: 560-568
Keywords: Pedestrian Detection
Abstract: The most relevant modules of a pedestrian detector are candidate generation and candidate classification. The former presents image windows to the latter so that they can be classified as containing a pedestrian or not. Much attention has been paid to the classification module, while candidate generation has mainly relied on a (multiscale) sliding-window pyramid. However, candidate generation is critical for achieving real-time performance. In this paper we assume a context of autonomous driving based on stereo vision. Accordingly, we evaluate the effect of taking the 3D information derived from the stereo pair into account in order to prune the hundreds of thousands of windows per image generated by the classical pyramidal sliding window (see the sketch following this record). For our study we use a multimodal (RGB, disparity) and multi-descriptor (HOG, LBP, HOG+LBP) holistic ensemble based on a linear SVM. Evaluation on data from the challenging KITTI benchmark suite shows the effectiveness of using 3D information to dramatically reduce the number of candidate windows, while even improving overall pedestrian detection accuracy.
Address: Santiago de Compostela; Spain; June 2015
Area: ACDC
Conference: IbPRIA
Notes: ADAS; 600.076; 600.057; 600.054
Approved: no
Call Number: ADAS @ adas @ GVR2015
Serial: 2585
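
The pruning idea admits a compact sketch: with a calibrated stereo rig, the disparity at a candidate foot point fixes its depth, and hence the pixel height a pedestrian standing there could have, so only a narrow band of window sizes per location needs classifying. All constants below (height range, aspect ratio, stride) are illustrative assumptions, not the paper's settings.

```python
def candidate_windows(disparity, f, b, stride=8,
                      h_min=1.4, h_max=2.0, aspect=0.41):
    """Yield (x, y, w, h) windows consistent with the stereo-derived depth.

    disparity: 2-D numpy array of disparities (pixels); f: focal length
    (pixels); b: stereo baseline (metres). Assumes pedestrians stand on the
    ground and are between h_min and h_max metres tall.
    """
    rows, cols = disparity.shape
    for v in range(0, rows, stride):          # candidate foot row
        for u in range(0, cols, stride):      # candidate foot column
            d = disparity[v, u]
            if d <= 0:
                continue                      # no reliable stereo match here
            z = f * b / d                     # depth of the ground point (m)
            for h_m in (h_min, h_max):
                h_px = int(f * h_m / z)       # plausible height in pixels
                w_px = int(aspect * h_px)
                top = v - h_px                # window top, feet at row v
                if top >= 0 and u + w_px <= cols:
                    yield (u, top, w_px, h_px)
```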
 

 
Author: Victor Campmany; Sergio Silva; Juan Carlos Moure; Antoni Espinosa; David Vazquez; Antonio Lopez
Title: GPU-based pedestrian detection for autonomous driving
Type: Abstract
Year: 2015
Publication: Programming and Tuning Massively Parallel Systems
Abbreviated Journal: PUMPS
Keywords: Autonomous Driving; ADAS; CUDA; Pedestrian Detection
Abstract: Pedestrian detection for autonomous driving has gained a lot of prominence during the last few years. Besides being one of the hardest tasks within computer vision, it also involves huge computational costs. The real-time constraints in this field are tight, and regular processors cannot handle the workload at an acceptable frame rate (fps). Moreover, multiple cameras are required to obtain accurate results, so the need to speed up the process is even greater. Taking the work in [1] as our baseline, we propose a CUDA implementation of a pedestrian detection system. Furthermore, we introduce significant algorithmic adjustments and optimizations to adapt the problem to the GPU architecture. The aim is to provide a system capable of running in real time while obtaining reliable results.
Address: Barcelona; Spain
Abbreviated Series Title: PUMPS
Conference: PUMPS
Notes: ADAS; 600.076; 600.082; 600.085
Approved: no
Call Number: ADAS @ adas @ CSM2015
Serial: 2644
 

 
Author: Xavier Baro; Jordi Gonzalez; Junior Fabian; Miguel Angel Bautista; Marc Oliu; Hugo Jair Escalante; Isabelle Guyon; Sergio Escalera
Title: ChaLearn Looking at People 2015 challenges: action spotting and cultural event recognition
Type: Conference Article
Year: 2015
Publication: 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Pages: 1-9
Abstract: Following the previous series of Looking at People (LAP) challenges [6, 5, 4], ChaLearn ran two competitions to be presented at CVPR 2015: action/interaction spotting and cultural event recognition in RGB data. For human activity recognition on RGB data sequences we ran a second round, while for cultural event recognition tens of categories have to be recognized, involving both scene understanding and human analysis. This paper summarizes the two challenges and the results obtained. Details of the ChaLearn LAP competitions can be found at http://gesture.chalearn.org/.
Address: Boston; USA; June 2015
Conference: CVPRW
Notes: HuPBA; MV
Approved: no
Serial: 2652