Author: Youssef El Rhabi; Simon Loic; Brun Luc; Josep Llados; Felipe Lumbreras
Title: Information Theoretic Rotationwise Robust Binary Descriptor Learning
Type: Conference Article
Year: 2016
Publication: Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR)
Pages: 368-378
Abstract: In this paper, we propose a new data-driven approach for binary descriptor selection. In order to draw a clear analysis of common designs, we present a general information-theoretic selection paradigm. It encompasses several standard binary descriptor construction schemes, including a recent state-of-the-art one named BOLD. Like BOLD, we aim to increase the stability of the produced descriptors with respect to rotations. To achieve this, we design a novel offline selection criterion that is better adapted to the online matching procedure. The effectiveness of our approach is demonstrated on two standard datasets, where our descriptor is compared to BOLD and to several classical descriptors. In particular, our approach matches or exceeds the performance of BOLD while relying on descriptors half as long. Such an improvement can be influential for real-time applications. (See the sketch after this record.)
Address: Mérida, Mexico; November 2016
Conference: S+SSPR
Notes: DAG; ADAS; 600.097; 600.086
Approved: no
Call Number: Admin @ si @ RLL2016; Serial 2871
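
The abstract names an offline information-theoretic selection criterion without giving its form, so the following Python sketch shows only a generic scheme in that family: greedily keep candidate binary tests whose responses have high entropy and low correlation with the bits already kept. The data layout, the entropy ranking, and the correlation threshold are assumptions for illustration, not the paper's criterion.

import numpy as np

def select_bits(responses, n_bits, max_corr=0.2):
    """Greedy bit selection. responses is an (n_patches, n_candidates)
    0/1 matrix of candidate binary tests evaluated on training patches."""
    responses = responses.astype(float)
    p = responses.mean(axis=0)
    eps = 1e-12
    # per-test entropy: a balanced test carries the most information
    H = -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
    order = np.argsort(-H)
    chosen = [int(order[0])]
    for idx in order[1:]:
        if len(chosen) == n_bits:
            break
        # keep the test only if weakly correlated with those already kept
        corr = np.abs(np.corrcoef(responses[:, chosen + [int(idx)]].T)[-1, :-1])
        if corr.max() < max_corr:
            chosen.append(int(idx))
    return np.array(chosen)

rng = np.random.default_rng(0)
tests = rng.random((500, 256)) < rng.uniform(0.2, 0.8, 256)  # synthetic responses
print(select_bits(tests, 32))
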
Author: Patricia Suarez; Angel Sappa; Boris X. Vintimilla
Title: Infrared Image Colorization based on a Triplet DCGAN Architecture
Type: Conference Article
Year: 2017
Publication: IEEE Conference on Computer Vision and Pattern Recognition Workshops
Abstract: This paper proposes a novel approach for colorizing near-infrared (NIR) images using Deep Convolutional Generative Adversarial Network (DCGAN) architectures. The proposed approach is based on the use of a triplet model that learns each color channel independently, in a more homogeneous way. This allows fast convergence during training and yields greater similarity between the given NIR image and the corresponding ground truth. The proposed approach has been evaluated on a large dataset of NIR images and compared with a recent approach that is also based on a GAN architecture but obtains all the color channels at once. (See the sketch after this record.)
Address: Honolulu, Hawaii, USA; July 2017
Conference: CVPRW
Notes: ADAS; 600.086; 600.118
Approved: no
Call Number: Admin @ si @ SSV2017b; Serial 2920
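
As a reading aid, here is a minimal PyTorch sketch of the triplet layout the abstract describes: three independent generators, one per color channel, whose outputs are stacked into an RGB image. The layer sizes are invented for brevity, and the adversarial (DCGAN) training loop is omitted entirely.

import torch
import torch.nn as nn

class ChannelGenerator(nn.Module):
    """Tiny stand-in generator mapping a 1-channel NIR image to one color
    channel; the paper's DCGAN generators are deeper, this only shows shape."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, nir):
        return self.net(nir)

class TripletColorizer(nn.Module):
    """Triplet layout: one independent generator per color channel."""
    def __init__(self):
        super().__init__()
        self.generators = nn.ModuleList(ChannelGenerator() for _ in range(3))
    def forward(self, nir):
        # each channel is produced independently, then stacked into RGB
        return torch.cat([g(nir) for g in self.generators], dim=1)

nir = torch.randn(2, 1, 64, 64)   # a batch of NIR patches (stand-in data)
rgb = TripletColorizer()(nir)
print(rgb.shape)                   # torch.Size([2, 3, 64, 64])
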
Author: David Aldavert; Marçal Rusiñol; Ricardo Toledo; Josep Llados
Title: Integrating Visual and Textual Cues for Query-by-String Word Spotting
Type: Conference Article
Year: 2013
Publication: 12th International Conference on Document Analysis and Recognition
Pages: 511-515
Abstract: In this paper, we present a word spotting framework that follows the query-by-string paradigm, where word images are represented by both textual and visual representations. The textual representation is formulated in terms of character n-grams, while the visual one is based on the bag-of-visual-words scheme. These two representations are merged together and projected to a sub-vector space. Given a textual query, this transform allows retrieving word instances that were represented only by the visual modality. Moreover, this statistical representation can be used together with state-of-the-art indexing structures in order to deal with large-scale scenarios. The proposed method is evaluated on a collection of historical documents, outperforming state-of-the-art approaches. (See the sketch after this record.)
Address: Washington, USA; August 2013
ISSN: 1520-5363
Conference: ICDAR
Notes: DAG; ADAS; 600.045; 600.055; 600.061
Approved: no
Call Number: Admin @ si @ ART2013; Serial 2224
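
A minimal sketch of the two-modality idea, assuming scikit-learn: character n-gram counts stand in for the textual representation, random histograms stand in for the bag-of-visual-words, both are concatenated and projected to a common sub-vector space (TruncatedSVD here; the paper's projection may well differ), and a text-only query is zero-padded on the visual side before projection.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

words = ["record", "records", "recorder", "report", "import"]
# textual modality: character n-gram counts of the transcriptions
ngrams = CountVectorizer(analyzer="char", ngram_range=(2, 3))
T = ngrams.fit_transform(words).toarray().astype(float)

# visual modality: bag-of-visual-words histograms (random stand-ins here)
rng = np.random.default_rng(0)
V = rng.random((len(words), 64))

# merge both modalities and project them to a common sub-vector space
X = np.hstack([T, V])
svd = TruncatedSVD(n_components=3, random_state=0).fit(X)
Z = svd.transform(X)

# a query-by-string arrives with no visual part: zero-pad and project
q = np.hstack([ngrams.transform(["recor"]).toarray().astype(float),
               np.zeros((1, V.shape[1]))])
zq = svd.transform(q)
# rank word images by cosine similarity to the text-only query
sims = (Z @ zq.T).ravel() / (np.linalg.norm(Z, axis=1) * np.linalg.norm(zq) + 1e-12)
print(words[int(np.argmax(sims))])
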
Author: Marçal Rusiñol; David Aldavert; Dimosthenis Karatzas; Ricardo Toledo; Josep Llados
Title: Interactive Trademark Image Retrieval by Fusing Semantic and Visual Content
Type: Conference Article
Year: 2011
Publication: 33rd European Conference on Information Retrieval (Advances in Information Retrieval)
Volume: 6611
Pages: 314-325
Abstract: In this paper we propose an efficient query-by-example retrieval system that is able to retrieve trademark images by similarity from patent and trademark offices' digital libraries. Logo images are described both by their semantic content, by means of Vienna codes, and by their visual content, using shape and color as visual cues. The trademark descriptors are then indexed by a locality-sensitive hashing data structure, aiming to perform approximate k-NN search in high-dimensional spaces in sub-linear time. The resulting ranked lists are combined using the Condorcet method, and a relevance feedback step helps to iteratively revise the query and refine the obtained results. The experiments demonstrate the effectiveness and efficiency of this system on a realistic and large dataset. (See the sketch after this record.)
Address: Dublin, Ireland
Publisher: Springer, Berlin
Editors: P. Clough; C. Foley; C. Gurrin; G.J.F. Jones; W. Kraaij; H. Lee; V. Murdoch
Series: LNCS
ISBN: 978-3-642-20160-8
Conference: ECIR
Notes: DAG; RV; ADAS
Approved: no
Call Number: Admin @ si @ RAK2011; Serial 1737
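
The Condorcet fusion step can be made concrete in a few lines of Python: the semantic and visual ranked lists vote pairwise, and an item precedes another in the fused list when a majority of input rankings place it higher. The logo identifiers are hypothetical, and LSH indexing and relevance feedback are omitted.

from functools import cmp_to_key

def condorcet_fuse(rankings):
    """Fuse ranked lists with pairwise majority voting (Condorcet-style):
    item a precedes item b if most input rankings place a above b."""
    items = sorted(set().union(*rankings))
    # position of each item in each ranking; missing items rank last
    pos = [{it: (r.index(it) if it in r else len(r)) for it in items}
           for r in rankings]
    def beats(a, b):
        votes = sum(1 if p[a] < p[b] else -1 for p in pos if p[a] != p[b])
        return -1 if votes > 0 else (1 if votes < 0 else 0)
    return sorted(items, key=cmp_to_key(beats))

# ranked lists from the two modalities (hypothetical logo ids)
semantic = ["logoA", "logoC", "logoB", "logoD"]
visual = ["logoC", "logoA", "logoB", "logoE"]
print(condorcet_fuse([semantic, visual]))
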
Author: Guim Perarnau; Joost Van de Weijer; Bogdan Raducanu; Jose Manuel Alvarez
Title: Invertible Conditional GANs for Image Editing
Type: Conference Article
Year: 2016
Publication: 30th Annual Conference on Neural Information Processing Systems Workshops
Abstract: Generative Adversarial Networks (GANs) have recently been shown to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information allows determining specific representations of the generated images. In this work, we evaluate encoders that invert the mapping of a cGAN, i.e., that map a real image into a latent space and a conditional representation. This allows, for example, reconstructing and modifying real images of faces conditioned on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables re-generating real images with deterministic, complex modifications. (See the sketch after this record.)
Address: Barcelona, Spain; December 2016
Conference: NIPSW
Notes: LAMP; ADAS; 600.068
Approved: no
Call Number: Admin @ si @ PWR2016; Serial 2906
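
A minimal PyTorch sketch of the IcGAN editing loop described in the abstract: an encoder maps a real image to a latent code z and a conditional representation y, one attribute of y is changed, and the conditional generator re-generates the image. The network bodies, Z_DIM, and Y_DIM are stand-ins; the actual models are convolutional and adversarially trained.

import torch
import torch.nn as nn

Z_DIM, Y_DIM = 100, 18  # latent size and attribute count are assumptions

class Encoder(nn.Module):
    """Stand-in encoder E mapping an image to (z, y)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
        self.to_z = nn.Linear(256, Z_DIM)
        self.to_y = nn.Linear(256, Y_DIM)
    def forward(self, x):
        h = self.body(x)
        return self.to_z(h), torch.sigmoid(self.to_y(h))

class Generator(nn.Module):
    """Stand-in conditional generator G mapping (z, y) back to an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Z_DIM + Y_DIM, 3 * 64 * 64), nn.Tanh())
    def forward(self, z, y):
        return self.net(torch.cat([z, y], dim=1)).view(-1, 3, 64, 64)

E, G = Encoder(), Generator()
x = torch.randn(1, 3, 64, 64)      # a real face image (stand-in)
with torch.no_grad():
    z, y = E(x)                    # invert the cGAN mapping
    y_edit = y.clone()
    y_edit[0, 0] = 1.0             # flip one attribute deterministically
    x_edit = G(z, y_edit)          # re-generate the modified image
print(x_edit.shape)
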
Author: Zhijie Fang; Antonio Lopez
Title: Is the Pedestrian going to Cross? Answering by 2D Pose Estimation
Type: Conference Article
Year: 2018
Publication: IEEE Intelligent Vehicles Symposium
Pages: 1271-1276
Abstract: Our recent work suggests that, thanks to nowadays powerful CNNs, image-based 2D pose estimation is a promising cue for determining pedestrian intentions such as crossing the road in the path of the ego-vehicle, stopping before entering the road, and starting to walk or bending towards the road. This statement is based on results obtained on non-naturalistic sequences (Daimler dataset), i.e. sequences choreographed specifically for performing the study. Fortunately, a new publicly available dataset (JAAD) has appeared recently that allows developing methods for detecting pedestrian intentions in naturalistic driving conditions; more specifically, for addressing the relevant question "is the pedestrian going to cross?". Accordingly, in this paper we use JAAD to assess the usefulness of 2D pose estimation for answering this question. We combine CNN-based pedestrian detection, tracking and pose estimation to predict the crossing action from monocular images. Overall, the proposed pipeline provides new state-of-the-art results. (See the sketch after this record.)
Conference: IV
Notes: ADAS; 600.124; 600.116; 600.118
Approved: no
Call Number: Admin @ si @ FaL2018; Serial 3181
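
To make the pipeline's last stage concrete, here is a hedged Python sketch: 2D joints gathered over a tracked time window are flattened into one feature vector and fed to an off-the-shelf classifier that predicts crossing intention. The window length, joint count, and the RandomForest choice are assumptions, not necessarily the paper's classifier.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed layout: a tracked pedestrian yields T frames of J 2D joints each.
T, J = 14, 18
rng = np.random.default_rng(0)

def window_features(keypoints):
    """keypoints: (T, J, 2) normalized joint coordinates over a time window,
    flattened into one feature vector for the intention classifier."""
    return keypoints.reshape(-1)

# stand-in data: pose windows from the tracker and cross/not-cross labels
windows = rng.random((200, T, J, 2))
labels = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(np.stack([window_features(w) for w in windows]), labels)
print(clf.predict(window_features(windows[0])[None]))  # 1 = will cross
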
Author: Victor Vaquero; German Ros; Francesc Moreno-Noguer; Antonio Lopez; Alberto Sanfeliu
Title: Joint Coarse-and-Fine Reasoning for Deep Optical Flow
Type: Conference Article
Year: 2017
Publication: 24th International Conference on Image Processing
Pages: 2558-2562
Abstract: We propose a novel representation for dense pixel-wise estimation tasks using CNNs that boosts accuracy and reduces training time by explicitly exploiting joint coarse-and-fine reasoning. The coarse reasoning is performed over a discrete classification space to obtain a rough general solution, while the fine details of the solution are obtained over a continuous regression space. In our approach both components are jointly estimated, which proves beneficial for improving estimation accuracy. Additionally, we propose a new network architecture that combines the coarse and fine components by treating the fine estimation as a refinement built on top of the coarse solution, thereby adding details to the general prediction. We apply our approach to the challenging problem of optical flow estimation and empirically validate it against state-of-the-art CNN-based solutions trained from scratch and tested on large optical flow datasets. (See the sketch after this record.)
Address: Beijing, China; September 2017
Conference: ICIP
Notes: ADAS; 600.118
Approved: no
Call Number: Admin @ si @ VRM2017; Serial 2898
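
A minimal PyTorch sketch of the coarse-and-fine idea: per pixel, a classification branch scores discrete flow bins (reduced here with a soft-argmax, an assumption), while a regression branch adds a continuous residual on top of the coarse solution. Channel counts, the bin grid, the 1x1-conv heads, and the decoupling of u and v are all invented for brevity.

import torch
import torch.nn as nn

class CoarseAndFineFlowHead(nn.Module):
    """Illustrative head: coarse flow as classification over discrete bins,
    fine flow as a regressed residual added on top (all sizes assumed)."""
    def __init__(self, in_ch=64, n_bins=32, bin_size=4.0):
        super().__init__()
        self.n_bins, self.bin_size = n_bins, bin_size
        self.coarse = nn.Conv2d(in_ch, 2 * n_bins, 1)  # per-pixel logits for u and v
        self.fine = nn.Conv2d(in_ch, 2, 1)             # per-pixel continuous residual
    def forward(self, feats):
        b, _, h, w = feats.shape
        logits = self.coarse(feats).view(b, 2, self.n_bins, h, w)
        bins = (torch.arange(self.n_bins, dtype=feats.dtype, device=feats.device)
                - self.n_bins // 2) * self.bin_size
        prob = logits.softmax(dim=2)
        # soft-argmax over the discrete bins gives the coarse flow estimate
        coarse_flow = (prob * bins.view(1, 1, -1, 1, 1)).sum(dim=2)
        return coarse_flow + self.fine(feats)  # refine the coarse solution

feats = torch.randn(1, 64, 32, 48)   # shared CNN features (stand-in)
flow = CoarseAndFineFlowHead()(feats)
print(flow.shape)                    # torch.Size([1, 2, 32, 48])
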
Author: Naveen Onkarappa; Angel Sappa
Title: Laplacian Derivative based Regularization for Optical Flow Estimation in Driving Scenario
Type: Conference Article
Year: 2013
Publication: 15th International Conference on Computer Analysis of Images and Patterns
Volume: 8048
Pages: 483-490
Keywords: Optical flow; regularization; Driver Assistance Systems; Performance Evaluation
Abstract: Existing state-of-the-art optical flow approaches, which are evaluated on standard datasets such as Middlebury, do not necessarily perform equally well on driving scenarios. This drop in performance is due to several challenges that arise in real scenarios during driving. In this paper, we propose a modification to the regularization term of a variational optical flow formulation that notably improves the results, especially in driving scenarios. The proposed modification consists in using the Laplacian derivatives of the flow components in the regularization term instead of the gradients of the flow components. We show the improvement in results on a standard real image sequence dataset (KITTI). (See the worked equations after this record.)
Address: York, UK; August 2013
Publisher: Springer Berlin Heidelberg
Series: LNCS
ISSN: 0302-9743
ISBN: 978-3-642-40245-6
Conference: CAIP
Notes: ADAS; 600.055; 601.215
Approved: no
Call Number: Admin @ si @ OnS2013b; Serial 2244
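
Written out, the modification is easy to state. A generic variational flow energy with the usual first-order regularizer is

E(u,v) = \int_\Omega \Psi\big( (I_1(\mathbf{x}+\mathbf{w}(\mathbf{x})) - I_0(\mathbf{x}))^2 \big)\, d\mathbf{x}
       + \lambda \int_\Omega \big( |\nabla u|^2 + |\nabla v|^2 \big)\, d\mathbf{x},
\qquad \mathbf{w} = (u, v)^\top,

where the data term and the penalty \Psi are generic stand-ins rather than the paper's exact choices. The proposed change swaps the gradient penalty for a Laplacian one:

\lambda \int_\Omega \big( (\Delta u)^2 + (\Delta v)^2 \big)\, d\mathbf{x},
\qquad \Delta u = u_{xx} + u_{yy}, \quad \Delta v = v_{xx} + v_{yy}.

A second-order penalty vanishes on affine flow fields, so it favors piecewise-linear rather than piecewise-constant flow, which suits the smoothly varying fields induced by forward ego-motion in driving scenes.
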
Author: Jiaolong Xu; David Vazquez; Antonio Lopez; Javier Marin; Daniel Ponsa
Title: Learning a Multiview Part-based Model in Virtual World for Pedestrian Detection
Type: Conference Article
Year: 2013
Publication: IEEE Intelligent Vehicles Symposium
Pages: 467-472
Keywords: Pedestrian Detection; Virtual World; Part based
Abstract: State-of-the-art deformable part-based models based on latent SVM have shown excellent results on human detection. In this paper, we propose to train a multiview deformable part-based model with automatically generated part examples from virtual-world data. The method is efficient because: (i) the part detectors are trained with precisely extracted virtual examples, so no latent learning is needed; (ii) the multiview pedestrian detector enhances the performance of the pedestrian root model; and (iii) a top-down approach is used for part detection, which reduces the search space. We evaluate our model on the Daimler and Karlsruhe Pedestrian Benchmarks with the publicly available Caltech pedestrian detection evaluation framework, and the results outperform the state-of-the-art latent SVM V4.0 on both average miss rate and speed (our detector is ten times faster). (See the sketch after this record.)
Address: Gold Coast, Australia; June 2013
Publisher: IEEE
ISSN: 1931-0587
ISBN: 978-1-4673-2754-1
Conference: IV
Notes: ADAS; 600.054; 600.057
Approved: no
Call Number: XVL2013; ADAS @ adas @ xvl2013a; Serial 2214
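
Two of the abstract's efficiency points lend themselves to a short sketch: part detectors are plain (non-latent) linear SVMs because virtual-world data provides exactly cropped part examples, and parts are scored top-down only inside windows the root model already accepted. The feature dimensionality, the stand-in data, and the additive scoring rule below are assumptions for illustration.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
D = 128  # HOG feature dimensionality (stand-in)

def train_part_detector(pos_feats, neg_feats):
    """Virtual data gives exact part boxes, so each part detector is trained
    directly on precise crops -- no latent variable estimation needed."""
    X = np.vstack([pos_feats, neg_feats])
    y = np.r_[np.ones(len(pos_feats)), np.zeros(len(neg_feats))]
    return LinearSVC(C=0.01).fit(X, y)

head = train_part_detector(rng.random((100, D)) + 0.3, rng.random((300, D)))
legs = train_part_detector(rng.random((100, D)) + 0.3, rng.random((300, D)))

def score_window(root_score, part_feats, detectors, threshold=0.0):
    """Top-down rule: parts are never searched outside root detections."""
    if root_score < threshold:
        return None  # pruned: the root model rejected this window
    return root_score + sum(d.decision_function(f[None])[0]
                            for d, f in zip(detectors, part_feats))

print(score_window(0.7, [rng.random(D), rng.random(D)], [head, legs]))
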
Author: Javier Marin; David Vazquez; David Geronimo; Antonio Lopez
Title: Learning Appearance in Virtual Scenarios for Pedestrian Detection
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 137-144
Keywords: Pedestrian Detection; Domain Adaptation
Abstract: Detecting pedestrians in images is a key functionality to avoid vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then learn appearance-based pedestrian classifiers using HOG and linear SVM. We test these classifiers on a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking. This dataset contains real-world images acquired from a moving car. The obtained result is compared with that of a classifier learnt using samples coming from real images. The comparison reveals that, although the virtual samples were not specially selected, both virtual- and real-based training give rise to classifiers of similar performance. (See the sketch after this record.)
Address: San Francisco, CA, USA; June 2010
Language: English
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ MVG2010; Serial 1304
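
The training recipe named in the abstract (HOG features plus a linear SVM) is standard enough to sketch, assuming scikit-image and scikit-learn; only the data is a stand-in, with random arrays in place of virtual-world pedestrian and background crops.

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-ins for 64x128 grayscale crops: pedestrians rendered in a virtual
# world and background crops (in the paper, both come from recorded virtual
# sequences; the real-image test set is Daimler's).
pos = rng.random((50, 128, 64))
neg = rng.random((150, 128, 64))

def hog_features(img):
    # Dalal-Triggs-style HOG, the descriptor named in the abstract
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

X = np.array([hog_features(im) for im in list(pos) + list(neg)])
y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
clf = LinearSVC(C=0.01).fit(X, y)    # linear SVM, as in the abstract
print(clf.decision_function(X[:2]))  # window scores for two crops
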