Author: Cesar de Souza; Adrien Gaidon; Eleonora Vig; Antonio Lopez
Title: Sympathy for the Details: Dense Trajectories and Hybrid Classification Architectures for Action Recognition
Type: Conference Article
Year: 2016
Publication: 14th European Conference on Computer Vision
Pages: 697-716
Abstract: Action recognition in videos is a challenging task due to the complexity of the spatio-temporal patterns to model and the difficulty of acquiring and learning from large quantities of video data. Deep learning, although a breakthrough for image classification and showing promise for videos, has still not clearly superseded action recognition methods that use hand-crafted features, even when trained on massive datasets. In this paper, we introduce hybrid video classification architectures based on carefully designed unsupervised representations of hand-crafted spatio-temporal features classified by supervised deep networks. As we show in our experiments on five popular benchmarks for action recognition, our hybrid model combines the best of both worlds: it is data efficient (trained on 150 to 10,000 short clips) and yet improves significantly on the state of the art, including recent deep models trained on millions of manually labelled images and videos.
Address: Amsterdam, The Netherlands; October 2016
Abbreviated Series Title: LNCS
Conference: ECCV
Notes: ADAS; 600.076; 600.085
Approved: no
Call Number: Admin @ si @ SGV2016
Serial: 2824
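The hybrid pipeline this abstract outlines, an unsupervised encoding of hand-crafted descriptors fed into a supervised network, can be sketched in a few lines. This is a minimal illustration with mock descriptors standing in for dense trajectories and a simplified soft-assignment encoding standing in for full Fisher vectors; all names and dimensions are assumptions, not the paper's code.

```python
# Minimal sketch of the hybrid idea: unsupervised codebook encoding of
# hand-crafted descriptors, classified by a supervised network.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def mock_descriptors(n_clips, n_local=200, dim=64):
    # Stand-in for local spatio-temporal descriptors (in the real pipeline,
    # HOG/HOF/MBH computed along dense trajectories).
    return [rng.normal(size=(n_local, dim)) for _ in range(n_clips)]

train_clips = mock_descriptors(20)
train_labels = rng.integers(0, 2, 20)

# Unsupervised stage: fit a small GMM codebook on the pooled descriptors.
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(np.vstack(train_clips))

def encode(descs):
    # Soft-assignment statistics, a simplified stand-in for Fisher vectors.
    q = gmm.predict_proba(descs)             # (n_local, K) responsibilities
    first_order = q.T @ descs / len(descs)   # (K, dim) weighted means
    return np.hstack([q.mean(axis=0), first_order.ravel()])

X = np.array([encode(d) for d in train_clips])

# Supervised stage: a small fully connected network on the clip encodings.
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
clf.fit(X, train_labels)
print(clf.predict(X[:3]))
```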
 

 
Author: Joan Serrat; Ferran Diego; Felipe Lumbreras; Jose Manuel Alvarez
Title: Synchronization of Video Sequences from Free-moving Cameras
Type: Conference Article
Year: 2007
Publication: 3rd Iberian Conference on Pattern Recognition and Image Analysis
Volume: 4477
Pages: 620-627
Address: Girona (Spain)
Editor: J. Marti et al.
Abbreviated Series Title: LNCS
Conference: IbPRIA
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ SDL2007
Serial: 880
 

 
Author: Javad Zolfaghari Bengar; Abel Gonzalez-Garcia; Gabriel Villalonga; Bogdan Raducanu; Hamed H. Aghdam; Mikhail Mozerov; Antonio Lopez; Joost Van de Weijer
Title: Temporal Coherence for Active Learning in Videos
Type: Conference Article
Year: 2019
Publication: IEEE International Conference on Computer Vision Workshops
Pages: 914-923
Abstract: Autonomous driving systems require huge amounts of data to train. Manual annotation of this data is time-consuming and prohibitively expensive, since it involves human resources. Active learning has therefore emerged as an alternative to ease this effort and make data annotation more manageable. In this paper, we introduce a novel active learning approach for object detection in videos that exploits temporal coherence. Our active learning criterion is based on the estimated number of errors, in terms of false positives and false negatives. The detections obtained by the object detector are used to define the nodes of a graph and are tracked forward and backward to temporally link the nodes. Minimizing an energy function defined on this graphical model provides estimates of both false positives and false negatives. Additionally, we introduce a synthetic video dataset, called SYNTHIA-AL, specially designed to evaluate active learning for video object detection in road scenes. Finally, we show that our approach outperforms active learning baselines tested on two datasets.
Address: Seoul, Korea; October 2019
Conference: ICCVW
Notes: LAMP; ADAS; 600.124; 602.200; 600.118; 600.120; 600.141
Approved: no
Call Number: Admin @ si @ ZGV2019
Serial: 3294
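The core signal in this abstract, that detections which are temporally incoherent across linked frames suggest false positives or negatives, can be illustrated with a toy frame-scoring function. The sketch below uses simple neighbour-frame IoU matching rather than the paper's graph energy minimization; function names and the threshold are assumptions.

```python
# Toy active-learning score: count detections with no temporal support in
# adjacent frames; high-scoring frames are candidates for annotation.
import numpy as np

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def incoherence_score(frames, thr=0.5):
    # frames: list of arrays of boxes [x1, y1, x2, y2], one array per frame.
    scores = []
    for t, boxes in enumerate(frames):
        neigh = [b for dt in (-1, 1) if 0 <= t + dt < len(frames)
                 for b in frames[t + dt]]
        unmatched = sum(1 for b in boxes
                        if not any(iou(b, n) >= thr for n in neigh))
        scores.append(unmatched)  # high score suggests detection errors
    return scores

frames = [np.array([[0, 0, 10, 10]]),
          np.array([[0, 0, 10, 10], [50, 50, 60, 60]]),  # flickering box
          np.array([[1, 0, 11, 10]])]
print(incoherence_score(frames))  # the middle frame gets the highest score
```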
 

 
Author: Arnau Ramisa; David Aldavert; Shrihari Vasudevan; Ricardo Toledo; Ramon Lopez de Mantaras
Title: The IIIA30 Mobile Robot Object Recognition Dataset
Type: Conference Article
Year: 2011
Publication: 11th Portuguese Robotics Open
Abstract: Object perception is a key capability for making mobile robots able to perform high-level tasks. However, research aimed at addressing the constraints and limitations encountered in a mobile robotics scenario, like low image resolution, motion blur or tight computational constraints, is still very scarce. In order to facilitate future research in this direction, in this work we present an object detection and recognition dataset acquired using a mobile robotic platform. As a baseline for the dataset, we evaluated the cascade-of-weak-classifiers object detection method of Viola and Jones.
Address: Lisbon
Conference: Robotica
Notes: RV; ADAS
Approved: no
Call Number: Admin @ si @ RAV2011
Serial: 1777
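For reference, the Viola-Jones cascade baseline mentioned in the abstract is available off the shelf in OpenCV. A minimal sketch follows; the image path is a placeholder and the stock face cascade is only a stand-in for whatever object cascades the evaluation actually trained.

```python
# Minimal Viola-Jones detection with OpenCV's bundled cascades.
import cv2

# cv2.data.haarcascades points at the XML files shipped with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("robot_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
if img is not None:
    # Low-resolution, motion-blurred robot imagery tends to need a permissive
    # scale step and few required neighbours.
    for (x, y, w, h) in cascade.detectMultiScale(img, scaleFactor=1.1,
                                                 minNeighbors=3):
        print(f"detection at ({x}, {y}), size {w}x{h}")
```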
 

 
Author: Simon Jégou; Michal Drozdzal; David Vazquez; Adriana Romero; Yoshua Bengio
Title: The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation
Type: Conference Article
Year: 2017
Publication: IEEE Conference on Computer Vision and Pattern Recognition Workshops
Keywords: Semantic Segmentation
Abstract: State-of-the-art approaches for semantic image segmentation are built on Convolutional Neural Networks (CNNs). The typical segmentation architecture is composed of (a) a downsampling path responsible for extracting coarse semantic features, followed by (b) an upsampling path trained to recover the input image resolution at the output of the model and, optionally, (c) a post-processing module (e.g. Conditional Random Fields) to refine the model predictions. Recently, a new CNN architecture, Densely Connected Convolutional Networks (DenseNets), has shown excellent results on image classification tasks. The idea of DenseNets is based on the observation that if each layer is directly connected to every other layer in a feed-forward fashion, then the network will be more accurate and easier to train. In this paper, we extend DenseNets to deal with the problem of semantic segmentation. We achieve state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any further post-processing module or pretraining. Moreover, due to the smart construction of the model, our approach has far fewer parameters than the currently published best entries for these datasets.
Address: Honolulu, USA; July 2017
Conference: CVPRW
Notes: MILAB; ADAS; 600.076; 600.085; 601.281
Approved: no
Call Number: ADAS @ adas @ JDV2016
Serial: 2866
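The dense-connectivity pattern this abstract builds on is easy to show in a few lines of PyTorch: each layer's output is concatenated with everything before it, so later layers see all earlier feature maps. The block below is a minimal illustration (growth rate and depth are assumed values), not the paper's full FC-DenseNet.

```python
# Minimal dense block in the DenseNet style used by FC-DenseNets.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=12, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            # Each layer consumes the input plus all previous layer outputs.
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1)))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        # Concatenate everything for the next stage of the network.
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=16)
print(block(torch.randn(1, 16, 32, 32)).shape)  # (1, 16 + 4*12, 32, 32)
```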
 

 
Author: German Ros; Laura Sellart; Joanna Materzynska; David Vazquez; Antonio Lopez
Title: The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes
Type: Conference Article
Year: 2016
Publication: 29th IEEE Conference on Computer Vision and Pattern Recognition
Pages: 3234-3243
Keywords: Domain Adaptation; Autonomous Driving; Virtual Data; Semantic Segmentation
Abstract: Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. The advent of deep convolutional neural networks (DCNNs) makes it possible to foresee reliable classifiers for this visual task. However, DCNNs must learn many parameters from raw images; thus, a sufficient amount of diverse images with class annotations is needed. Obtaining these annotations is cumbersome human labour, especially challenging for semantic segmentation, since pixel-level annotations are required. In this paper, we propose to use a virtual world for automatically generating realistic synthetic images with pixel-level annotations. We then address the question of how useful such data can be for the task of semantic segmentation, in particular when using a DCNN paradigm. To answer this question we have generated a diverse synthetic collection of urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. We then conduct experiments on a DCNN setting which show that including SYNTHIA in the training stage significantly improves the performance of the semantic segmentation task.
Address: Las Vegas, USA; June 2016
Conference: CVPR
Notes: ADAS; 600.085; 600.082; 600.076
Approved: no
Call Number: ADAS @ adas @ RSM2016
Serial: 2739
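In its simplest form, the training recipe described here (combining synthetic and real annotated images in one training set) reduces to concatenating the two datasets in the loader. A minimal PyTorch sketch, with random tensors standing in for both sources; dataset names and sizes are illustrative.

```python
# Mixing a large synthetic set with a smaller real set in one loader.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-ins for (image, pixel-label) pairs; real code would load SYNTHIA
# frames and a manually annotated real-world urban dataset.
synthetic = TensorDataset(torch.randn(100, 3, 64, 64),
                          torch.randint(0, 12, (100, 64, 64)))
real = TensorDataset(torch.randn(10, 3, 64, 64),
                     torch.randint(0, 12, (10, 64, 64)))

loader = DataLoader(ConcatDataset([synthetic, real]), batch_size=8,
                    shuffle=True)
for images, labels in loader:
    # one segmentation-training step would go here
    break
```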
 

 
Author: Cristina Cañero; Petia Radeva; Oriol Pujol; Ricardo Toledo; Debora Gil; J. Saludes; Juan J. Villanueva; B. Garcia del Blanco; J. Mauri; Eduard Fernandez-Nofrerias; J.A. Gomez-Hospital; E. Iraculis; J. Comin; C. Quiles; F. Jara; A. Cequier; E. Esplugas
Title: Three-dimensional reconstruction and quantification of the coronary tree using intravascular ultrasound images
Type: Conference Article
Year: 1999
Publication: Proceedings of the International Conference on Computers in Cardiology (CIC'99)
Abstract: In this paper we propose a new computer vision technique to reconstruct the vascular wall in space using a deformable-model-based technique and compounding methods, based on the fusion of biplane angiography and intravascular ultrasound (IVUS) data. A general-purpose three-dimensional guided interpolation method is also proposed. The three-dimensional centerline of the vessel is reconstructed from geometrically corrected biplane angiographies using automatic segmentation methods and snakes. The IVUS image planes are located in three-dimensional space and correctly oriented. A guided interpolation method based on B-surfaces and snakes is used to fill the gaps among image planes.
Conference: CINC99
Notes: MILAB; RV; IAM; ADAS; HuPBA
Approved: no
Call Number: IAM @ iam @ CRP1999b
Serial: 1492
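One building block of this pipeline, positioning and orienting the IVUS image planes along a smooth 3D vessel centerline, can be illustrated with a spline fit. The sketch below uses SciPy B-splines on synthetic points as an illustrative stand-in; it is not the paper's snake-based reconstruction.

```python
# Fit a smooth 3D centerline through sparse points and derive, at each
# resampled position, the tangent that would orient an IVUS image plane.
import numpy as np
from scipy.interpolate import splprep, splev

# Sparse 3D centerline samples (a gentle helix as a stand-in for a vessel).
t = np.linspace(0, 2 * np.pi, 12)
pts = np.stack([np.cos(t), np.sin(t), 0.3 * t])

# Fit a smoothing B-spline and resample it densely.
tck, _ = splprep(pts, s=0.01)
u = np.linspace(0, 1, 200)
x, y, z = splev(u, tck)

# Unit tangents along the curve; each IVUS plane's normal is the tangent.
dx, dy, dz = splev(u, tck, der=1)
tangents = np.stack([dx, dy, dz])
tangents /= np.linalg.norm(tangents, axis=0)
print(tangents[:, 0])  # plane orientation at the first resampled position
```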
 

 
Author: Marçal Rusiñol; David Aldavert; Ricardo Toledo; Josep Llados
Title: Towards Query-by-Speech Handwritten Keyword Spotting
Type: Conference Article
Year: 2015
Publication: 13th International Conference on Document Analysis and Recognition (ICDAR 2015)
Pages: 501-505
Abstract: In this paper, we present a new querying paradigm for handwritten keyword spotting. We propose to represent handwritten word images by both visual and audio representations, enabling a query-by-speech keyword spotting system. The two representations are merged together and projected to a common subspace in the training phase. This transform allows us, given a spoken query, to retrieve word instances that were only represented by the visual modality. In addition, the same method can be used backwards at no additional cost to produce a handwritten text-to-speech system. We present our first results on this new querying mechanism using synthetic voices over the George Washington dataset.
Address: Nancy, France; August 2015
Conference: ICDAR
Notes: DAG; 600.084; 600.061; 601.223; 600.077; ADAS
Approved: no
Call Number: Admin @ si @ RAT2015b
Serial: 2682
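The common-subspace projection this abstract relies on can be sketched with canonical correlation analysis. In the snippet below, random features stand in for the paper's visual and audio word representations, and CCA is an assumed choice of projection, not necessarily the paper's exact transform.

```python
# Cross-modal retrieval in a shared subspace learned with CCA.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_pairs = 100
latent = rng.normal(size=(n_pairs, 5))  # shared word identity
visual = latent @ rng.normal(size=(5, 40)) + 0.1 * rng.normal(size=(n_pairs, 40))
audio = latent @ rng.normal(size=(5, 30)) + 0.1 * rng.normal(size=(n_pairs, 30))

cca = CCA(n_components=5)
cca.fit(visual, audio)
vis_proj, aud_proj = cca.transform(visual, audio)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Retrieval: rank word images by similarity to a projected spoken query.
query = aud_proj[7]
ranking = np.argsort([-cosine(v, query) for v in vis_proj])
print(ranking[:5])  # the matching word image (index 7) should rank high
```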
 

 
Author: Jiaolong Xu; Peng Wang; Heng Yang; Antonio Lopez
Title: Training a Binary Weight Object Detector by Knowledge Transfer for Autonomous Driving
Type: Conference Article
Year: 2019
Publication: IEEE International Conference on Robotics and Automation
Pages: 2379-2384
Abstract: Autonomous driving has harsh requirements of small model size and energy efficiency in order to enable embedded systems to achieve real-time on-board object detection. Recent deep convolutional neural network based object detectors have achieved state-of-the-art accuracy. However, such models are trained with numerous parameters, and their high computational cost and large storage footprint prohibit deployment to systems with limited memory and computation resources. Low-precision neural networks are popular techniques for reducing computation requirements and memory footprint. Among them, the binary weight neural network (BWN) is the extreme case, which quantizes the floating-point weights into just one bit. BWNs are difficult to train and suffer from accuracy degradation due to the extreme low-bit representation. To address this problem, we propose a knowledge transfer (KT) method to aid the training of BWNs using a full-precision teacher network. We built DarkNet- and MobileNet-based binary weight YOLO-v2 detectors and conducted experiments on the KITTI benchmark for car, pedestrian and cyclist detection. The experimental results show that the proposed method maintains high detection accuracy while reducing the model size of DarkNet-YOLO from 257 MB to 8.8 MB and MobileNet-YOLO from 193 MB to 7.9 MB.
Address: Montreal, Canada; May 2019
Conference: ICRA
Notes: ADAS; 600.124; 600.116; 600.118
Approved: no
Call Number: Admin @ si @ XWY2018
Serial: 3182
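The two mechanisms named in this abstract, binary weight quantization and teacher-guided training, can be sketched in a few lines of PyTorch. The toy below binarizes weights as sign(W) scaled by the mean absolute value and regresses the student's outputs onto a full-precision teacher's; the tiny networks, the MSE objective, and the straight-through trick are illustrative assumptions, not the paper's detector code.

```python
# Binary-weight training with a knowledge-transfer loss from a teacher.
import torch
import torch.nn as nn
import torch.nn.functional as F

def binarize(w):
    # BWN-style weights in {-alpha, +alpha} with alpha = mean(|w|).
    return w.abs().mean() * w.sign()

teacher = nn.Linear(16, 4)   # stand-in full-precision teacher
student = nn.Linear(16, 4)   # student whose weights get binarized
opt = torch.optim.SGD(student.parameters(), lr=0.1)

x = torch.randn(32, 16)
for _ in range(10):
    with torch.no_grad():
        t_out = teacher(x)
    # Straight-through-style trick: forward with binarized weights while
    # gradients keep flowing to the underlying real-valued weights.
    w_bin = binarize(student.weight)
    s_out = F.linear(x, student.weight + (w_bin - student.weight).detach(),
                     student.bias)
    loss = F.mse_loss(s_out, t_out)  # knowledge-transfer objective
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
```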
 

 
Author: Judit Martinez; Eva Costa; P. Herreros; Antonio Lopez; Juan J. Villanueva
Title: TV-Screen Quality Inspection by Artificial Vision
Type: Conference Article
Year: 2003
Publication: Proceedings SPIE 5132, Sixth International Conference on Quality Control by Artificial Vision (QCAV 2003)
Abstract: A real-time vision system for TV-screen quality inspection is introduced. The whole system consists of eight cameras and one processor per camera. It acquires and processes 112 images in 6 seconds. The defects to be inspected can be grouped into four main categories (bubble, line-out, line reduction and landing), although there is large variability within each particular type of defect. The complexity of the whole inspection process has been reduced by dividing images into smaller ones and grouping the defects into frequency-relevant and intensity-relevant ones. Tools such as mathematical morphology, the Fourier transform, profile analysis and classification have been used. The performance of the system has been successfully proved against human operators in normal production conditions.
Address: Gatlinburg (USA)
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ MCH2003a
Serial: 393
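The frequency-domain screening this abstract alludes to can be illustrated with a toy example: a periodic defect shows up as an off-centre peak in the image spectrum. The synthetic image and the threshold below are purely illustrative, not the system's actual parameters.

```python
# Detect a periodic (frequency-relevant) defect as a strong off-centre
# peak in the 2D Fourier spectrum of the screen image.
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(0, 0.05, size=(128, 128))              # clean screen + noise
xs = np.arange(128)
img += 0.5 * np.sin(2 * np.pi * 10 * xs / 128)[None, :]  # periodic defect

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
spectrum[62:67, 62:67] = 0          # suppress the DC neighbourhood
peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
defect = spectrum[peak] > 50 * spectrum.mean()  # illustrative threshold
print(peak, bool(defect))
```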