Records
Author Leonardo Galteri; Dena Bazazian; Lorenzo Seidenari; Marco Bertini; Andrew Bagdanov; Anguelos Nicolaou; Dimosthenis Karatzas; Alberto del Bimbo
Title Reading Text in the Wild from Compressed Images Type Conference Article
Year 2017 Publication 1st International Workshop on Egocentric Perception, Interaction and Computing Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Reading text in the wild is gaining attention in the computer vision community. Images captured in the wild are almost always compressed to varying degrees, depending on application context, and this compression introduces artifacts that distort the content of the captured images. In this paper we investigate the impact these compression artifacts have on text localization and recognition in the wild. We also propose a deep Convolutional Neural Network (CNN) that can eliminate text-specific compression artifacts and which leads to an improvement in text recognition. Experimental results on the ICDAR-Challenge4 dataset demonstrate that compression artifacts have a significant impact on text localization and recognition and that our approach yields an improvement in both, especially at high compression rates.
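Code sketch (illustrative, not from the paper) A minimal residual CNN for reducing compression artifacts in text crops, written in PyTorch; the layer sizes and the residual design are assumptions for demonstration only, not the architecture proposed by the authors.

# Minimal artifact-reduction CNN sketch (PyTorch); layer sizes are illustrative,
# not the network used in the paper.
import torch
import torch.nn as nn

class ArtifactRemovalCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4),   # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),  # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, x):
        # Predict a residual correction and add it to the compressed input.
        return x + self.net(x)

if __name__ == "__main__":
    model = ArtifactRemovalCNN()
    compressed = torch.rand(1, 3, 64, 256)   # a fake compressed text crop
    restored = model(compressed)
    print(restored.shape)                    # torch.Size([1, 3, 64, 256])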
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV - EPIC
Notes DAG; 600.084; 600.121 Approved no
Call Number Admin @ si @ GBS2017 Serial 3006
Permanent link to this record
 

 
Author Alejandro Cartas; Mariella Dimiccoli; Petia Radeva
Title Batch-based activity recognition from egocentric photo-streams Type Conference Article
Year 2017 Publication 1st International Workshop on Egocentric Perception, Interaction and Computing Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Activity recognition from long unstructured egocentric photo-streams has several applications in assistive technology, such as health monitoring and frailty detection, to name just a few. However, one of its main technical challenges is to deal with the low frame rate of wearable photo-cameras, which causes abrupt appearance changes between consecutive frames. As a consequence, important discriminative low-level motion features such as optical flow cannot be estimated. In this paper, we present a batch-driven approach for training a deep learning architecture that relies strongly on Long Short-Term Memory (LSTM) units to tackle this problem. We propose two different implementations of the same approach that process a photo-stream sequence using batches of fixed size with the goal of capturing the temporal evolution of high-level features. The main difference between these implementations is that one explicitly models consecutive batches by overlapping them. Experimental results on a public dataset acquired by three users demonstrate the validity of the proposed architectures to exploit the temporal evolution of convolutional features without relying on event boundaries.
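Code sketch (illustrative, not from the paper) A hedged PyTorch sketch of the batch-driven idea: fixed-size (optionally overlapping) batches of per-frame CNN features are fed to an LSTM. The feature dimension, batch length, overlap and class count are assumptions, not the paper's configuration.

# Classify fixed-size batches of per-frame CNN features with an LSTM.
import torch
import torch.nn as nn

class BatchActivityLSTM(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, n_classes=21):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, feats):                 # feats: (batches, T, feat_dim)
        out, _ = self.lstm(feats)
        return self.fc(out)                   # per-frame class scores

def split_into_batches(stream_feats, batch_len=10, overlap=5):
    """Cut a photo-stream feature sequence into (optionally overlapping) batches."""
    step = batch_len - overlap if overlap else batch_len
    return [stream_feats[i:i + batch_len]
            for i in range(0, len(stream_feats) - batch_len + 1, step)]

if __name__ == "__main__":
    stream = torch.rand(100, 512)                      # 100 frames of CNN features
    batches = torch.stack(split_into_batches(stream))  # (n_batches, 10, 512)
    scores = BatchActivityLSTM()(batches)
    print(scores.shape)                                # (n_batches, 10, 21)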
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV - EPIC
Notes MILAB; not mentioned Approved no
Call Number Admin @ si @ CDR2017 Serial 3023
Permanent link to this record
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell
Title Top-Down Color Attention for Object Recognition Type Conference Article
Year 2009 Publication 12th International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 979 - 986
Keywords
Abstract Generally, the bag-of-words based image representation follows a bottom-up paradigm. The subsequent stages of the process (feature detection, feature description, vocabulary construction and image representation) are performed independently of the object classes to be detected. In such a framework, combining multiple cues such as shape and color often provides below-expected results. This paper presents a novel method for recognizing object categories from multiple cues by separating the shape and color cues. Color is used to guide attention by means of a top-down, category-specific attention map. The color attention map is then deployed to modulate the shape features, taking more features from regions within an image that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets in which the two cues have varying importance: Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that on all three data sets our proposed framework significantly outperforms state-of-the-art methods for combining color and shape information.
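Code sketch (illustrative, not from the paper) A toy NumPy sketch of the central mechanism, top-down color attention modulating shape-word votes before the histogram is accumulated. The vocabulary sizes and the attention table are made-up assumptions rather than values from the paper.

# Top-down color attention: shape-word votes are weighted by a class-specific
# attention score derived from each region's color word.
import numpy as np

def class_specific_histogram(shape_words, color_words, color_attention, n_shape_words):
    """Accumulate a shape histogram where each local feature is weighted by
    the attention its color word receives for the target class."""
    hist = np.zeros(n_shape_words)
    for sw, cw in zip(shape_words, color_words):
        hist[sw] += color_attention[cw]
    s = hist.sum()
    return hist / s if s > 0 else hist

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape_words = rng.integers(0, 100, size=500)   # quantized shape descriptors
    color_words = rng.integers(0, 25, size=500)    # quantized color descriptors
    # Attention per color word for one category (e.g. how "soccer-ball-like" a color is)
    color_attention = rng.random(25)
    h = class_specific_histogram(shape_words, color_words, color_attention, 100)
    print(h.shape, h.sum())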
Address Kyoto, Japan
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1550-5499 ISBN 978-1-4244-4420-5 Medium
Area Expedition Conference ICCV
Notes CIC Approved no
Call Number CAT @ cat @ SWV2009 Serial 1196
Permanent link to this record
 

 
Author Ivan Huerta; Michael Holte; Thomas B. Moeslund; Jordi Gonzalez
Title Detection and Removal of Chromatic Moving Shadows in Surveillance Scenarios Type Conference Article
Year 2009 Publication 12th International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 1499 - 1506
Keywords
Abstract Segmentation in the surveillance domain has to deal with shadows to avoid distortions when detecting moving objects. Most segmentation approaches dealing with shadow detection are typically restricted to penumbra shadows and therefore cannot cope well with umbra shadows. Consequently, umbra shadows are usually detected as part of moving objects. In this paper we present a novel technique based on gradient and colour models for separating chromatic moving cast shadows from detected moving objects. Firstly, both a chromatic invariant colour cone model and an invariant gradient model are built to perform automatic segmentation while detecting potential shadows. In a second step, regions corresponding to potential shadows are grouped by considering “a bluish effect” and an edge partitioning. Lastly, (i) temporal similarities between textures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for all potential shadow regions in order to identify umbra shadows. Unlike other approaches, our method does not make any a priori assumptions about camera location, surface geometries, surface textures, shapes and types of shadows, objects, or background. Experimental results show the performance and accuracy of our approach on different shadowed materials and under different illumination conditions.
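Code sketch (illustrative, not from the paper) The NumPy sketch below shows only one standard chromatic-shadow cue (darker than the background, similar chromaticity). It is a simplified stand-in, not the paper's invariant colour-cone and gradient models, and all thresholds are assumptions.

# A foreground pixel is a shadow candidate if it is darker than the background
# but keeps a similar normalized-rgb chromaticity.
import numpy as np

def shadow_candidates(frame, background, fg_mask,
                      min_darkening=0.4, max_darkening=0.95, max_chroma_dist=0.05):
    frame = frame.astype(np.float64) + 1e-6
    background = background.astype(np.float64) + 1e-6
    ratio = frame.sum(axis=2) / background.sum(axis=2)          # brightness ratio
    chroma_f = frame / frame.sum(axis=2, keepdims=True)         # normalized rgb
    chroma_b = background / background.sum(axis=2, keepdims=True)
    chroma_dist = np.abs(chroma_f - chroma_b).sum(axis=2)
    darker = (ratio > min_darkening) & (ratio < max_darkening)
    return fg_mask & darker & (chroma_dist < max_chroma_dist)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    bg = rng.integers(50, 200, size=(120, 160, 3))
    fr = (bg * 0.6).astype(np.int64)                  # globally darkened "shadowed" frame
    mask = np.ones((120, 160), dtype=bool)
    print(shadow_candidates(fr, bg, mask).mean())     # fraction flagged as shadow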
Address Kyoto, Japan
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1550-5499 ISBN 978-1-4244-4420-5 Medium
Area Expedition Conference ICCV
Notes Approved no
Call Number ISE @ ise @ HHM2009 Serial 1213
Permanent link to this record
 

 
Author Gemma Roig; Xavier Boix; Fernando De la Torre
Title Optimal Feature Selection for Subspace Image Matching Type Conference Article
Year 2009 Publication 2nd IEEE International Workshop on Subspace Methods, in conjunction with ICCV 2009 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Image matching has been a central research topic in computer vision over the last decades. Typical approaches to correspondence involve matching feature points between images. In this paper, we present a novel problem: establishing correspondences between a sparse set of image features and a previously learned subspace model. We formulate the matching task as an energy minimization, and jointly optimize over all possible feature assignments and the parameters of the subspace model. This problem is in general NP-hard. We propose a convex relaxation approximation, and develop two optimization strategies: naïve gradient descent and quadratic programming. Alternatively, we reformulate the optimization criterion as a sparse eigenvalue problem and solve it using a recently proposed backward greedy algorithm. Experimental results on facial feature detection show that the quadratic programming solution provides a better selection mechanism for relevant features.
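Code sketch (illustrative, not from the paper) A hedged illustration of a backward greedy strategy for a sparse eigenvalue problem: starting from all candidate features, repeatedly drop the one whose removal hurts the leading eigenvalue least. The matrix A is a synthetic stand-in for the paper's matching criterion, not its actual energy.

# Backward greedy selection: keep k features whose restricted matrix retains
# the largest principal eigenvalue.
import numpy as np

def backward_greedy_selection(A, k):
    """A: symmetric (n, n) matrix; returns indices of k selected features."""
    selected = list(range(A.shape[0]))
    while len(selected) > k:
        best_drop, best_val = None, -np.inf
        for i in selected:
            rest = [j for j in selected if j != i]
            sub = A[np.ix_(rest, rest)]
            val = np.linalg.eigvalsh(sub)[-1]   # largest eigenvalue after dropping i
            if val > best_val:
                best_drop, best_val = i, val
        selected.remove(best_drop)
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.standard_normal((30, 8))
    A = X @ X.T / 8.0                           # symmetric PSD toy matrix
    print(backward_greedy_selection(A, k=5))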
Address Kyoto, Japan
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes Approved no
Call Number Admin @ si @ RBT2009 Serial 1233
Permanent link to this record
 

 
Author Xavier Boix; Josep M. Gonfaus; Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Marco Pedersoli; Jordi Gonzalez; Joan Serrat
Title Combining local and global bag-of-word representations for semantic segmentation Type Conference Article
Year 2009 Publication Workshop on The PASCAL Visual Object Classes Challenge Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Kyoto (Japan)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes ADAS;ISE Approved no
Call Number ADAS @ adas @ BGS2009 Serial 1273
Permanent link to this record
 

 
Author Fares Alnajar; Theo Gevers; Roberto Valenti; Sennay Ghebreab
Title Calibration-free Gaze Estimation using Human Gaze Patterns Type Conference Article
Year 2013 Publication 15th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 137-144
Keywords
Abstract We present a novel method to auto-calibrate gaze estimators based on gaze patterns obtained from other viewers. Our method is based on the observation that the gaze patterns of humans are indicative of where a new viewer will look [12]. When a new viewer is looking at a stimulus, we first estimate a topology of gaze points (initial gaze points). Next, these points are transformed so that they match the gaze patterns of other humans to find the correct gaze points. In a flexible uncalibrated setup with a web camera and no chin rest, the proposed method was tested on ten subjects and ten images. The method estimates the gaze points after looking at a stimulus for a few seconds with an average accuracy of 4.3°. Although the reported performance is lower than what could be achieved with dedicated hardware or a calibrated setup, the proposed method still provides sufficient accuracy to trace the viewer's attention. This is promising considering the fact that auto-calibration is done in a flexible setup, without the use of a chin rest, and based only on a few seconds of gaze initialization data. To the best of our knowledge, this is the first work to use human gaze patterns to auto-calibrate gaze estimators.
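Code sketch (illustrative, not from the paper) A small NumPy sketch of the transformation step described above: fitting a least-squares similarity transform (Umeyama-style) that maps a viewer's initial gaze points onto a reference gaze pattern. The reference pattern here is synthetic and the gaze estimator itself is not modelled.

# Align initial gaze points to a reference gaze pattern with a similarity transform.
import numpy as np

def fit_similarity(src, dst):
    """Return scale s, rotation R, translation t minimizing ||s R src + t - dst||^2."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    reference = rng.random((20, 2)) * [800, 600]            # gaze pattern of other viewers
    initial = reference * 0.8 + np.array([30.0, -10.0])     # distorted initial gaze points
    s, R, t = fit_similarity(initial, reference)
    corrected = (s * (R @ initial.T)).T + t
    print(np.abs(corrected - reference).max())              # ~0 for this synthetic case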
Address Sydney
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ AGV2013 Serial 2365
Permanent link to this record
 

 
Author Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers
Title Like Father, Like Son: Facial Expression Dynamics for Kinship Verification Type Conference Article
Year 2013 Publication 15th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 1497-1504
Keywords
Abstract Kinship verification from facial appearance is a difficult problem. This paper explores the possibility of employing facial expression dynamics for this problem. By using features that describe facial dynamics and spatio-temporal appearance over smile expressions, we show that it is possible to improve the state of the art in this problem, and we verify that it is indeed possible to recognize kinship from the resemblance of facial expressions. The proposed method is tested on different kin relationships. On average, 72.89% verification accuracy is achieved on spontaneous smiles.
Address Sydney
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ DSG2013 Serial 2366
Permanent link to this record
 

 
Author Koen E.A. van de Sande; Jasper Uijlings; Theo Gevers; Arnold Smeulders
Title Segmentation as Selective Search for Object Recognition Type Conference Article
Year 2011 Publication 13th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 1879-1886
Keywords
Abstract For object recognition, the current state-of-the-art is based on exhaustive search. However, to enable the use of more expensive features and classifiers, and thereby progress beyond the state-of-the-art, a selective search strategy is needed. Therefore, we adapt segmentation as a selective search by reconsidering segmentation: we propose to generate many approximate locations over few and precise object delineations because (1) an object whose location is never generated cannot be recognised and (2) appearance and immediate nearby context are most effective for object recognition. Our method is class-independent and is shown to cover 96.7% of all objects in the Pascal VOC 2007 test set using only 1,536 locations per image. Our selective search enables the use of the more expensive bag-of-words method, which we use to substantially improve the state-of-the-art by up to 8.5% for 8 out of 20 classes on the Pascal VOC 2010 detection challenge.
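Code sketch (illustrative, not from the paper) A rough sketch of the "segmentation as selective search" idea: generate class-independent candidate boxes from graph-based segmentations at several scales (requires scikit-image and scipy). This only approximates the spirit of the method; the paper's hierarchical grouping and diversification strategies are not reproduced, and the scale values are assumptions.

# Candidate object boxes from multi-scale Felzenszwalb segmentations.
import numpy as np
from scipy.ndimage import find_objects
from skimage.data import astronaut
from skimage.segmentation import felzenszwalb

def candidate_boxes(image, scales=(50, 100, 200, 400)):
    boxes = set()
    for scale in scales:
        labels = felzenszwalb(image, scale=scale, sigma=0.8, min_size=20)
        for sl in find_objects(labels + 1):          # bounding slices of every segment
            if sl is not None:
                ys, xs = sl
                boxes.add((xs.start, ys.start, xs.stop, ys.stop))
    return sorted(boxes)

if __name__ == "__main__":
    img = astronaut()[::2, ::2]            # sample RGB image shipped with skimage
    props = candidate_boxes(img)
    print(len(props), props[:3])           # class-independent candidate boxes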
Address Barcelona
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1550-5499 ISBN 978-1-4577-1101-5 Medium
Area Expedition Conference ICCV
Notes ISE Approved no
Call Number Admin @ si @ SUG2011 Serial 1780
Permanent link to this record
 

 
Author Shida Beigpour; Joost Van de Weijer
Title Object Recoloring Based on Intrinsic Image Estimation Type Conference Article
Year 2011 Publication 13th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 327 - 334
Keywords
Abstract Object recoloring is one of the most popular photo-editing tasks. The problem of object recoloring is highly under-constrained, and existing recoloring methods limit their application to objects lit by a white illuminant. Application of these methods to real-world scenes lit by colored illuminants, multiple illuminants, or interreflections results in unrealistic recoloring of objects. In this paper, we focus on the recoloring of single-colored objects presegmented from their background. The single-color constraint allows us to fit a more comprehensive physical model to the object. We demonstrate that this permits us to perform realistic recoloring of objects lit by non-white illuminants and multiple illuminants. Moreover, the model allows for more realistic handling of illuminant alteration of the scene. Recoloring results on images captured by uncalibrated cameras demonstrate that the proposed framework obtains realistic recoloring for complex natural images. Furthermore, we use the model to transfer color between objects and show that the results are more realistic than those of existing color transfer methods.
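Code sketch (illustrative, not from the paper) A toy NumPy sketch of recoloring via an intrinsic-style decomposition: treat intensity as shading and chromaticity as reflectance, then swap in the target chromaticity. This ignores the paper's illuminant and interreflection modelling; the test patch and target colour are arbitrary assumptions.

# Recolor a masked object by keeping its shading and replacing its chromaticity.
import numpy as np

def recolor(object_rgb, mask, target_rgb):
    img = object_rgb.astype(np.float64)
    shading = img.sum(axis=2, keepdims=True) / 3.0          # crude shading estimate
    target = np.asarray(target_rgb, dtype=np.float64)
    target_chroma = target / target.sum()                   # target reflectance chromaticity
    recolored = img.copy()
    recolored[mask] = 3.0 * shading[mask] * target_chroma
    return np.clip(recolored, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    obj = np.stack([rng.integers(100, 200, (64, 64)),       # a reddish test patch
                    rng.integers(20, 60, (64, 64)),
                    rng.integers(20, 60, (64, 64))], axis=2)
    mask = np.ones((64, 64), dtype=bool)
    out = recolor(obj, mask, target_rgb=(30, 90, 200))      # recolor toward blue
    print(out.mean(axis=(0, 1)))                            # blue channel now dominates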
Address Barcelona
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1550-5499 ISBN 978-1-4577-1101-5 Medium
Area Expedition Conference ICCV
Notes CIC Approved no
Call Number Admin @ si @ BeW2011 Serial 1781
Permanent link to this record
 

 
Author E. Serradell; Adriana Romero; R. Leta; Carlo Gatta; Francesc Moreno-Noguer
Title Simultaneous Correspondence and Non-Rigid 3D Reconstruction of the Coronary Tree from Single X-Ray Images Type Conference Article
Year 2011 Publication 13th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 850-857
Keywords
Abstract
Address Barcelona
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes MILAB Approved no
Call Number Admin @ si @ SRL2011 Serial 1803
Permanent link to this record
 

 
Author Bhaskar Chakraborty; Michael Holte; Thomas B. Moeslund; Jordi Gonzalez; Xavier Roca
Title A Selective Spatio-Temporal Interest Point Detector for Human Action Recognition in Complex Scenes Type Conference Article
Year 2011 Publication 13th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 1776-1783
Keywords
Abstract Recent progress in the field of human action recognition points towards the use of Spatio-Temporal Interest Points (STIPs) for local descriptor-based recognition strategies. In this paper we present a new approach for STIP detection by applying surround suppression combined with local and temporal constraints. Our method is significantly different from existing STIP detectors and improves the performance by detecting more repeatable, stable and distinctive STIPs for human actors, while suppressing unwanted background STIPs. For action representation we use a bag-of-visual words (BoV) model of local N-jet features to build a vocabulary of visual-words. To this end, we introduce a novel vocabulary building strategy by combining spatial pyramid and vocabulary compression techniques, resulting in improved performance and efficiency. Action class specific Support Vector Machine (SVM) classifiers are trained for categorization of human actions. A comprehensive set of experiments on existing benchmark datasets, and more challenging datasets of complex scenes, validate our approach and show state-of-the-art performance.
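Code sketch (illustrative, not from the paper) A small sketch of surround suppression applied to a 2D interest-point response map: each response is reduced by the average response of its neighbourhood, so isolated responses survive while densely textured background is suppressed. The radius and weighting are assumptions, and the paper's full spatio-temporal (STIP) detector is not shown.

# Surround suppression of an interest-point response map.
import numpy as np
from scipy.ndimage import uniform_filter

def surround_suppress(response, radius=7, alpha=1.0):
    size = 2 * radius + 1
    local_mean = uniform_filter(response, size=size, mode="nearest")
    # Approximate the surround average by excluding the centre contribution.
    surround = (local_mean * size**2 - response) / (size**2 - 1)
    return np.maximum(response - alpha * surround, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    resp = rng.random((120, 160))                   # noisy "cluttered" responses
    resp[40:44, 60:64] += 3.0                       # an isolated strong response survives
    suppressed = surround_suppress(resp)
    print(resp.max(), suppressed.max(), suppressed.mean())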
Address Barcelona
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1550-5499 ISBN 978-1-4577-1101-5 Medium
Area Expedition Conference ICCV
Notes ISE Approved no
Call Number Admin @ si @ CHM2011 Serial 1811
Permanent link to this record
 

 
Author Mohammad Rouhani; Angel Sappa
Title Correspondence Free Registration through a Point-to-Model Distance Minimization Type Conference Article
Year 2011 Publication 13th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 2150-2157
Keywords
Abstract This paper presents a novel formulation, which results in a smooth minimization problem, to tackle rigid registration between a given point set and a model set. Unlike most existing works, which are based on minimizing a point-wise correspondence term, we propose to describe the model set by means of an implicit representation. This allows a new definition of the registration error, which works beyond the point-level representation and can be used in a gradient-based optimization framework. The proposed approach consists of two stages. Firstly, a novel formulation is proposed that relates the registration parameters to the distance between the model and the data set. Secondly, the registration parameters are obtained by means of the Levenberg-Marquardt algorithm. Experimental results and comparisons with the state of the art show the validity of the proposed framework.
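Code sketch (illustrative, not from the paper) A compact sketch of correspondence-free registration: 2D points are rigidly transformed and the residual is an implicit point-to-model distance, minimized with Levenberg-Marquardt via scipy. The implicit model here is an axis-aligned ellipse, a stand-in for the paper's implicit representation of the model set.

# Rigid 2D registration by minimizing an implicit point-to-model distance (LM).
import numpy as np
from scipy.optimize import least_squares

def residuals(params, points):
    theta, tx, ty = params
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    moved = points @ R.T + np.array([tx, ty])
    # Implicit model F(x, y) = sqrt((x/2)^2 + y^2) - 1 = 0 (axis-aligned ellipse)
    return np.hypot(moved[:, 0] / 2.0, moved[:, 1]) - 1.0

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    angles = rng.uniform(0, 2 * np.pi, 200)
    model_pts = np.c_[2.0 * np.cos(angles), np.sin(angles)]         # points on the model
    c, s = np.cos(0.3), np.sin(0.3)
    data = model_pts @ np.array([[c, -s], [s, c]]).T + [0.4, -0.2]  # misaligned data set
    fit = least_squares(residuals, x0=[0.0, 0.0, 0.0], args=(data,), method="lm")
    print(fit.x)   # rigid transform that puts the data back onto the implicit model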
Address Barcelona
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1550-5499 ISBN 978-1-4577-1101-5 Medium
Area Expedition Conference ICCV
Notes ADAS Approved no
Call Number Admin @ si @ RoS2011b; ADAS @ adas @ Serial 1832
Permanent link to this record
 

 
Author Javier Marin; David Vazquez; Antonio Lopez; Jaume Amores; Bastian Leibe
Title Random Forests of Local Experts for Pedestrian Detection Type Conference Article
Year 2013 Publication 15th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 2592 - 2599
Keywords ADAS; Random Forest; Pedestrian Detection
Abstract Pedestrian detection is one of the most challenging tasks in computer vision and has received a lot of attention in recent years. Recently, some authors have shown the advantages of using combinations of part/patch-based detectors in order to cope with the large variability of poses and the existence of partial occlusions. In this paper, we propose a pedestrian detection method that efficiently combines multiple local experts by means of a Random Forest ensemble. The proposed method works with rich block-based representations such as HOG and LBP, in such a way that the same features are reused by the multiple local experts, so no extra computational cost is incurred with respect to a holistic method. Furthermore, we demonstrate how to integrate the proposed approach with a cascaded architecture in order to achieve not only high accuracy but also acceptable efficiency. In particular, the resulting detector operates at five frames per second on a laptop. We tested the proposed method on well-known challenging datasets such as Caltech, ETH, Daimler, and INRIA. The method proposed in this work consistently ranks among the top performers on all the datasets, either being the best method or differing only slightly from the best one.
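Code sketch (illustrative, not from the paper) A hedged sketch in the spirit of the approach: block-based HOG features of pedestrian-sized windows fed to a random forest classifier (requires scikit-image and scikit-learn). The data are random, the window and block layout are assumptions, and the paper's local-expert construction and cascade are not reproduced.

# Random forest over block-based HOG features of 64x128 detection windows.
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier

def block_features(window):
    """HOG over the whole window; cells_per_block yields block-based descriptors."""
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # Fake 64x128 grayscale crops: label 1 = "pedestrian", 0 = "background"
    windows = rng.random((40, 128, 64))
    labels = rng.integers(0, 2, size=40)
    X = np.array([block_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
    print(clf.predict_proba(X[:3]))       # per-window pedestrian scores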
Address Sydney; Australia; December 2013
Corporate Author Thesis
Publisher IEEE Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1550-5499 ISBN Medium
Area Expedition Conference ICCV
Notes ADAS; 600.057; 600.054 Approved no
Call Number ADAS @ adas @ MVL2013 Serial 2333
Permanent link to this record
 

 
Author Jon Almazan; Albert Gordo; Alicia Fornes; Ernest Valveny
Title Handwritten Word Spotting with Corrected Attributes Type Conference Article
Year 2013 Publication 15th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 1017-1024
Keywords
Abstract We propose an approach to multi-writer word spotting, where the goal is to find a query word in a dataset composed of document images. We propose an attributes-based approach that leads to a low-dimensional, fixed-length representation of the word images that is fast to compute and, especially, fast to compare. This approach naturally leads to a unified representation of word images and strings, which allows one to seamlessly perform both query-by-example, where the query is an image, and query-by-string, where the query is a string. We also propose a calibration scheme, based on Canonical Correlation Analysis, that corrects the attribute scores and greatly improves the results on a challenging dataset. We test our approach on two public datasets, showing state-of-the-art results.
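Code sketch (illustrative, not from the paper) A small sketch of the calibration idea: project image attribute scores and PHOC-like string embeddings into a common subspace with CCA, then retrieve by cosine similarity. The dimensions and data are synthetic assumptions, and the attribute learning itself is not shown.

# CCA-based calibration of attribute scores for query-by-string retrieval.
import numpy as np
from sklearn.cross_decomposition import CCA

def cosine_sim(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    n, d_attr = 200, 60
    string_emb = rng.integers(0, 2, size=(n, d_attr)).astype(float)    # "PHOC"-like labels
    attr_scores = string_emb + 0.3 * rng.standard_normal((n, d_attr))  # noisy image scores
    cca = CCA(n_components=30, max_iter=1000).fit(attr_scores, string_emb)
    img_sub, str_sub = cca.transform(attr_scores, string_emb)          # common subspace
    sims = cosine_sim(str_sub[:1], img_sub)          # query-by-string for word #0
    print(int(np.argmax(sims)))                      # best match should be image #0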
Address Sydney; Australia; December 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1550-5499 ISBN Medium
Area Expedition Conference ICCV
Notes DAG Approved no
Call Number Admin @ si @ AGF2013 Serial 2327
Permanent link to this record