Author: Josep M. Gonfaus; Theo Gevers; Arjan Gijsenij; Xavier Roca; Jordi Gonzalez
Title: Edge Classification using Photo-Geometric Features
Type: Conference Article
Year: 2012
Publication: 21st International Conference on Pattern Recognition
Pages: 1497-1500
Abstract: Edges are caused by several imaging cues such as shadow, material and illumination transitions. Classification methods have been proposed that are based solely on photometric information, ignoring geometry, to classify the physical nature of edges in images. In this paper, the aim is to present a novel strategy to handle both photometric and geometric information for edge classification. Photometric information is obtained through the use of quasi-invariants, while geometric information is derived from the orientation and contrast of edges. Different combination frameworks are compared with a new principled approach that captures both kinds of information in the same descriptor. From large-scale experiments on different datasets, it is shown that, in addition to photometric information, the geometry of edges is an important visual cue to distinguish between different edge types. It is concluded that by combining both cues the performance improves by more than 7% for shadows and highlights.
ISSN: 1051-4651
ISBN: 978-1-4673-2216-4
Conference: ICPR
Notes: ISE
Approved: no
Call Number: Admin @ si @ GGG2012b
Serial: 2142
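As a rough illustration of the kind of combination the abstract describes, the hypothetical sketch below concatenates per-edge photometric responses with simple geometric cues (orientation and contrast) into one descriptor and trains a linear classifier on it. The feature layout, label set and function names are assumptions for illustration only, not the authors' implementation.

    import numpy as np
    from sklearn.svm import LinearSVC

    def edge_descriptor(photometric, orientation, contrast):
        # Encode orientation as (cos 2theta, sin 2theta) to avoid the 180-degree
        # ambiguity of edge directions, then append the contrast value.
        geometric = np.array([np.cos(2 * orientation), np.sin(2 * orientation), contrast])
        return np.concatenate([photometric, geometric])

    def train_edge_classifier(photo_feats, orientations, contrasts, labels):
        # photo_feats: (n_edges, d) photometric features (e.g. quasi-invariant responses)
        # labels: physical edge type per edge, e.g. material / shadow / highlight
        X = np.stack([edge_descriptor(p, t, c)
                      for p, t, c in zip(photo_feats, orientations, contrasts)])
        return LinearSVC().fit(X, labels)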
 

 
Author: Shida Beigpour; Joost Van de Weijer
Title: Object Recoloring Based on Intrinsic Image Estimation
Type: Conference Article
Year: 2011
Publication: 13th IEEE International Conference on Computer Vision
Pages: 327-334
Abstract: Object recoloring is one of the most popular photo-editing tasks. The problem of object recoloring is highly under-constrained, and existing recoloring methods limit their application to objects lit by a white illuminant. Application of these methods to real-world scenes lit by colored illuminants, multiple illuminants, or interreflections results in unrealistic recoloring of objects. In this paper, we focus on the recoloring of single-colored objects pre-segmented from their background. The single-color constraint allows us to fit a more comprehensive physical model to the object. We demonstrate that this permits us to perform realistic recoloring of objects lit by non-white illuminants and multiple illuminants. Moreover, the model allows for more realistic handling of illuminant alteration of the scene. Recoloring results on images captured by uncalibrated cameras demonstrate that the proposed framework obtains realistic recoloring for complex natural images. Furthermore, we use the model to transfer color between objects and show that the results are more realistic than existing color transfer methods.
Address: Barcelona
ISSN: 1550-5499
ISBN: 978-1-4577-1101-5
Conference: ICCV
Notes: CIC
Approved: no
Call Number: Admin @ si @ BeW2011
Serial: 1781
 

 
Author: Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga
Title: Saliency Estimation Using a Non-Parametric Low-Level Vision Model
Type: Conference Article
Year: 2011
Publication: IEEE Conference on Computer Vision and Pattern Recognition
Pages: 433-440
Keywords: Gaussian mixture model; ad hoc parameter selection; center-surround inhibition windows; center-surround mechanism; color appearance model; convolution; eye-fixation data; human vision; innate spatial pooling mechanism; inverse wavelet transform; low-level visual front-end; nonparametric low-level vision model; saliency estimation; saliency map; scale integration; scale-weighted center-surround response; scale-weighting function; visual task; Gaussian processes; biology; biology computing; colour vision; computer vision; visual perception; wavelet transforms
Abstract: Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.
Address: Colorado Springs
ISSN: 1063-6919
ISBN: 978-1-4577-0394-2
Conference: CVPR
Notes: CIC
Approved: no
Call Number: Admin @ si @ MVO2011
Serial: 1757
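For readers unfamiliar with the generic recipe named in the first sentence of the abstract (filtering, center-surround, pooling), the sketch below approximates a multi-scale center-surround saliency map using Gaussian filters. It is a generic illustration only; the paper's actual model uses a wavelet decomposition, an optimized ECSF weighting function and GMM-derived window sizes, none of which are reproduced here.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def center_surround_saliency(channel, scales=(1, 2, 4, 8), surround_ratio=4.0):
        # channel: 2D float array (e.g. an opponent-color channel of an image).
        saliency = np.zeros_like(channel, dtype=float)
        for sigma in scales:
            center = gaussian_filter(channel, sigma=sigma)
            surround = gaussian_filter(channel, sigma=sigma * surround_ratio)
            saliency += np.abs(center - surround)   # center-surround response per scale
        saliency -= saliency.min()
        return saliency / (saliency.max() + 1e-8)   # normalize to [0, 1]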
 

 
Author: Aura Hernandez-Sabate; Monica Mitiko; Sergio Shiguemi; Debora Gil
Title: A validation protocol for assessing cardiac phase retrieval in IntraVascular UltraSound
Type: Conference Article
Year: 2010
Publication: Computing in Cardiology
Volume: 37
Pages: 899-902
Abstract: A good, reliable approach to cardiac triggering is of utmost importance in obtaining accurate quantitative results of atherosclerotic plaque burden from the analysis of IntraVascular UltraSound. Although research on methods for retrospective gating has increased in recent years, there is no general consensus on a validation protocol. Many methods are based on quality assessment of the appearance of longitudinal cuts, and those reporting quantitative numbers do not follow a standard protocol. Such heterogeneity in validation protocols makes faithful comparison across methods a difficult task. We propose a validation protocol based on the variability of the retrieved cardiac phase and explore the capability of several quality measures for quantifying such variability. An ideal detector, suitable for application in clinical practice, should produce stable phases. That is, it should always sample the same cardiac cycle fraction. In this context, one should measure the variability (variance) of a candidate sampling with respect to a ground truth (reference) sampling, since the variance indicates how scattered our aim at the target is. In order to quantify the deviation between the sampling and the ground truth, we have considered two quality scores reported in the literature: the signed distance to the closest reference sample and the distance to the right of each reference sample. We have also considered the residuals of the regression line of the reference against the candidate sampling. The performance of the measures has been explored on a set of synthetic samplings covering different cardiac cycle fractions and variabilities. From our simulations, we conclude that the metrics related to distances are sensitive to the shift considered, while the residuals are robust against fraction and variability as long as one can establish a pair-wise correspondence between candidate and reference. We will further investigate the impact of false positive and negative detections in experimental data.
Publisher: IEEE
ISSN: 0276-6547
ISBN: 978-1-4244-7318-2
Conference: CINC
Notes: IAM
Approved: no
Call Number: IAM @ iam @ HSM2010
Serial: 1551
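A minimal sketch of the residual-based measure mentioned in the abstract, assuming a pair-wise correspondence between candidate and reference samplings is already established: fit a regression line of the candidate against the reference and report the spread of the residuals. Function and variable names are illustrative, not taken from the paper.

    import numpy as np

    def residual_spread(reference, candidate):
        # reference, candidate: 1D arrays of paired cardiac-phase sample positions.
        reference = np.asarray(reference, dtype=float)
        candidate = np.asarray(candidate, dtype=float)
        slope, intercept = np.polyfit(reference, candidate, deg=1)
        residuals = candidate - (slope * reference + intercept)
        return residuals.std()   # small spread = stable phase retrieval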
 

 
Author: Arjan Gijsenij; Theo Gevers; Joost Van de Weijer
Title: Computational Color Constancy: Survey and Experiments
Type: Journal Article
Year: 2011
Publication: IEEE Transactions on Image Processing
Abbreviated Journal: TIP
Volume: 20
Issue: 9
Pages: 2475-2489
Keywords: computational color constancy; computer vision application; gamut-based method; learning-based method; static method; colour vision; computer vision; image colour analysis; learning (artificial intelligence); lighting
Abstract: Computational color constancy is a fundamental prerequisite for many computer vision applications. This paper presents a survey of many recent developments and state-of-the-art methods. Several criteria are proposed that are used to assess the approaches. A taxonomy of existing algorithms is proposed and methods are separated into three groups: static methods, gamut-based methods and learning-based methods. Further, the experimental setup is discussed, including an overview of publicly available data sets. Finally, various freely available methods, of which some are considered to be state-of-the-art, are evaluated on two data sets.
ISSN: 1057-7149
Notes: ISE; CIC
Approved: no
Call Number: Admin @ si @ GGW2011
Serial: 1717
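As an example of the "static methods" group in the survey's taxonomy, the sketch below implements the classic Grey-World estimator with a diagonal (von Kries-style) correction. It is included for illustration only and is not code from the paper.

    import numpy as np

    def grey_world_correct(img):
        # img: float array of shape (H, W, 3) with values in [0, 1].
        # Grey-World assumes the average scene reflectance is achromatic,
        # so the mean RGB value serves as an estimate of the illuminant color.
        illuminant = img.reshape(-1, 3).mean(axis=0)
        illuminant = illuminant / (np.linalg.norm(illuminant) + 1e-8)
        corrected = img / (illuminant * np.sqrt(3) + 1e-8)   # diagonal correction
        return np.clip(corrected, 0.0, 1.0), illuminant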
 

 
Author: Jaume Garcia; Albert Andaluz; Debora Gil; Francesc Carreras
Title: Decoupled External Forces in a Predictor-Corrector Segmentation Scheme for LV Contours in Tagged MR Images
Type: Conference Article
Year: 2010
Publication: 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society
Pages: 4805-4808
Abstract: Computation of functional regional scores requires proper identification of LV contours. On one hand, manual segmentation is robust, but it is time consuming and requires high expertise. On the other hand, the tag pattern in TMR sequences is a problem for automatic segmentation of LV boundaries. We propose a segmentation method based on a predictor-corrector (Active Contours – Shape Models) scheme. Special stress is put on the definition of the AC external forces. First, we introduce a semantic description of the LV that discriminates myocardial tissue by using texture and motion descriptors. Second, in order to ensure convergence regardless of the initial contour, the external energy is decoupled according to the orientation of the edges in the image potential. We have validated the model in terms of error in segmented contours and accuracy of regional clinical scores.
Address: Buenos Aires (Argentina)
Corporate Author: IEEE EMB
ISSN: 1557-170X
ISBN: 978-1-4244-4123-5
Conference: EMBC
Notes: IAM
Approved: no
Call Number: IAM @ iam @ GAG2010
Serial: 1514
 

 
Author: Juan Ramon Terven Salinas; Bogdan Raducanu; Maria Elena Meza-de-Luna; Joaquin Salas
Title: Evaluating Real-Time Mirroring of Head Gestures using Smart Glasses
Type: Conference Article
Year: 2015
Publication: 16th IEEE International Conference on Computer Vision Workshops
Pages: 452-460
Abstract: Mirroring occurs when one person tends to mimic the non-verbal communication of their counterparts. Even though mirroring is a complex phenomenon, in this study we focus on the detection of head-nodding as a simple non-verbal communication cue, due to its significance as a gesture displayed during social interactions. This paper introduces a computer vision-based method to detect mirroring through the analysis of head gestures using wearable cameras (smart glasses). In addition, we study how such a method can be used to explore perceived competence. The proposed method has been evaluated, and the experiments demonstrate how static and wearable cameras seem to be equally effective to gather the information required for the analysis.
Address: Santiago de Chile; December 2015
Conference: ICCVW
Notes: OR; 600.068; 600.072; MV
Approved: no
Call Number: Admin @ si @ TRM2015
Serial: 2722
 

 
Author: Sergio Escalera; Junior Fabian; Pablo Pardo; Xavier Baro; Jordi Gonzalez; Hugo Jair Escalante; Marc Oliu; Dusan Misevic; Ulrich Steiner; Isabelle Guyon
Title: ChaLearn Looking at People 2015: Apparent Age and Cultural Event Recognition Datasets and Results
Type: Conference Article
Year: 2015
Publication: 16th IEEE International Conference on Computer Vision Workshops
Pages: 243-251
Abstract: Following previous series on Looking at People (LAP) competitions [14, 13, 11, 12, 2], in 2015 ChaLearn ran two new competitions within the field of Looking at People: (1) age estimation, and (2) cultural event recognition, both in still images. We developed a crowd-sourcing application to collect and label data about the apparent age of people (as opposed to the real age). In terms of cultural event recognition, one hundred categories had to be recognized. These tasks involved scene understanding and human body analysis. This paper summarizes both challenges and data, as well as the results achieved by the participants of the competition.
Address: Santiago de Chile; December 2015
Conference: ICCVW
Notes: ISE; 600.063; 600.078; MV
Approved: no
Call Number: Admin @ si @ EFP2015
Serial: 2704
 

 
Author: Adria Ruiz; Joost Van de Weijer; Xavier Binefa
Title: From emotions to action units with hidden and semi-hidden-task learning
Type: Conference Article
Year: 2015
Publication: 16th IEEE International Conference on Computer Vision
Pages: 3703-3711
Abstract: Limited annotated training data is a challenging problem in Action Unit recognition. In this paper, we investigate how the use of large databases labelled according to the 6 universal facial expressions can increase the generalization ability of Action Unit classifiers. For this purpose, we propose a novel learning framework: Hidden-Task Learning. HTL aims to learn a set of Hidden-Tasks (Action Units) for which samples are not available but, in contrast, training data is easier to obtain from a set of related Visible-Tasks (Facial Expressions). To that end, HTL is able to exploit prior knowledge about the relation between Hidden and Visible-Tasks. In our case, we base this prior knowledge on empirical psychological studies providing statistical correlations between Action Units and universal facial expressions. Additionally, we extend HTL to Semi-Hidden Task Learning (SHTL), assuming that Action Unit training samples are also provided. Performing exhaustive experiments over four different datasets, we show that HTL and SHTL improve the generalization ability of AU classifiers by training them with additional facial expression data. Additionally, we show that SHTL achieves competitive performance compared with state-of-the-art Transductive Learning approaches, which face the problem of limited training data by using unlabelled test samples during training.
Address: Santiago de Chile; December 2015
Conference: ICCV
Notes: LAMP; 600.068; 600.079
Approved: no
Call Number: Admin @ si @ RWB2015
Serial: 2671
 

 
Author: Miguel Oliveira; L. Seabra Lopes; G. Hyun Lim; S. Hamidreza Kasaei; Angel Sappa; A. Tom
Title: Concurrent Learning of Visual Codebooks and Object Categories in Open-Ended Domains
Type: Conference Article
Year: 2015
Publication: International Conference on Intelligent Robots and Systems
Pages: 2488-2495
Keywords: Visual Learning; Computer Vision; Autonomous Agents
Abstract: In open-ended domains, robots must continuously learn new object categories. When the training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will find when operating online. In the Bag of Words model, visual codebooks are constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns, in an incremental and online fashion, both the visual object category representations as well as the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models which are updated using new object views. The approach contains similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over large periods of time. Results show that the proposed system with concurrent learning of object categories and codebooks is capable of learning more categories, requiring fewer examples, and with similar accuracies, when compared to the classical Bag of Words approach using offline constructed codebooks.
Address: Hamburg; Germany; October 2015
Conference: IROS
Notes: ADAS; 600.076
Approved: no
Call Number: Admin @ si @ OSL2015
Serial: 2664
 

 
Author: Cristhian A. Aguilera-Carrasco; Angel Sappa; Ricardo Toledo
Title: LGHD: a Feature Descriptor for Matching Across Non-Linear Intensity Variations
Type: Conference Article
Year: 2015
Publication: 22nd IEEE International Conference on Image Processing
Pages: 178-181
Address: Quebec; Canada; September 2015
Conference: ICIP
Notes: ADAS; 600.076
Approved: no
Call Number: Admin @ si @ AST2015
Serial: 2630
 

 
Author: Jean-Christophe Burie; J. Chazalon; M. Coustaty; S. Eskenazi; Muhammad Muzzamil Luqman; M. Mehri; N. Nayef; Jean-Marc Ogier; S. Prum; Marçal Rusiñol
Title: ICDAR2015 Competition on Smartphone Document Capture and OCR (SmartDoc)
Type: Conference Article
Year: 2015
Publication: 13th International Conference on Document Analysis and Recognition (ICDAR 2015)
Pages: 1161-1165
Abstract: Smartphones are enabling new ways of capture, hence the need for seamless and reliable acquisition and digitization of documents, in order to convert them to an editable, searchable and more human-readable format. Current state-of-the-art works lack databases and baseline benchmarks for digitizing mobile captured documents. We have organized a competition for mobile document capture and OCR in order to address this issue. The competition is structured into two independent challenges: smartphone document capture, and smartphone OCR. This report describes the datasets for both challenges along with their ground truth, details the performance evaluation protocols which we used, and presents the final results of the participating methods. In total, we received 13 submissions: 8 for challenge 1, and 5 for challenge 2.
Address: Nancy; France; August 2015
Conference: ICDAR
Notes: DAG; 600.077; 601.223; 600.084
Approved: no
Call Number: Admin @ si @ BCC2015
Serial: 2681
 

 
Author: Suman Ghosh; Ernest Valveny
Title: Query by String word spotting based on character bi-gram indexing
Type: Conference Article
Year: 2015
Publication: 13th International Conference on Document Analysis and Recognition (ICDAR 2015)
Pages: 881-885
Abstract: In this paper we propose a segmentation-free query-by-string word spotting method. Both the documents and query strings are encoded using a recently proposed word representation that projects images and strings into a common attribute space based on a pyramidal histogram of characters (PHOC). These attribute models are learned using linear SVMs over the Fisher Vector representation of the images along with the PHOC labels of the corresponding strings. In order to search through the whole page, document regions are indexed per character bi-gram using a similar attribute representation. On top of that, we propose an integral image representation of the document using a simplified version of the attribute model for efficient computation. Finally, we introduce a re-ranking step in order to boost retrieval performance. We show state-of-the-art results for segmentation-free query-by-string word spotting in single-writer and multi-writer standard datasets.
Address: Nancy; France; August 2015
Conference: ICDAR
Notes: DAG; 600.077
Approved: no
Call Number: Admin @ si @ GhV2015a
Serial: 2715
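A simplified sketch of the PHOC-style string embedding mentioned in the abstract: at each pyramid level the word is split into equal regions and each region gets a binary histogram over the alphabet. Real PHOC implementations use a region-overlap criterion and usually add bigram levels; here, for brevity, a character is assigned to the region containing its center, and the alphabet and level set are assumptions.

    import string
    import numpy as np

    def phoc(word, levels=(2, 3, 4, 5),
             alphabet=string.ascii_lowercase + string.digits):
        index = {c: i for i, c in enumerate(alphabet)}
        word = word.lower()
        parts = []
        for num_regions in levels:
            hist = np.zeros((num_regions, len(alphabet)))
            for pos, ch in enumerate(word):
                if ch not in index:
                    continue
                center = (pos + 0.5) / len(word)              # normalized character center
                region = min(int(center * num_regions), num_regions - 1)
                hist[region, index[ch]] = 1.0
            parts.append(hist.ravel())
        return np.concatenate(parts)                          # binary attribute vector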
 

 
Author: Pau Riba; Josep Llados; Alicia Fornes
Title: Handwritten Word Spotting by Inexact Matching of Grapheme Graphs
Type: Conference Article
Year: 2015
Publication: 13th International Conference on Document Analysis and Recognition (ICDAR 2015)
Pages: 781-785
Abstract: This paper presents a graph-based word spotting method for handwritten documents. Contrary to most word spotting techniques, which use statistical representations, we propose a structural representation suitable to be robust to the inherent deformations of handwriting. Attributed graphs are constructed using a part-based approach. Graphemes extracted from shape convexities are used as stable units of handwriting and are associated to graph nodes. Then, spatial relations between them determine graph edges. Spotting is defined in terms of an error-tolerant graph matching using a bipartite graph matching algorithm. To make the method usable in large datasets, a graph indexing approach that makes use of binary embeddings is used as preprocessing. Historical documents are used as the experimental framework. The approach is comparable to statistical ones in terms of time and memory requirements, especially when dealing with large document collections.
Conference: ICDAR
Notes: DAG; 600.077; 600.061; 602.006
Approved: no
Call Number: Admin @ si @ RLF2015b
Serial: 2642
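A hedged sketch of the bipartite approximation of graph edit distance that underlies this kind of error-tolerant graph matching (Riesen-Bunke style): node substitution, deletion and insertion costs are arranged in a square cost matrix and solved as an assignment problem. Edge information and the actual grapheme attributes are omitted, and the cost values are placeholders; this is not the authors' code.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def bipartite_graph_distance(nodes_a, nodes_b, node_cost, del_cost=1.0, ins_cost=1.0):
        n, m = len(nodes_a), len(nodes_b)
        big = 1e9                                   # forbids off-diagonal deletion/insertion pairs
        C = np.zeros((n + m, n + m))
        C[:n, :m] = [[node_cost(a, b) for b in nodes_b] for a in nodes_a]   # substitutions
        C[:n, m:] = big
        C[:n, m:][np.diag_indices(n)] = del_cost    # delete a node of the query graph
        C[n:, :m] = big
        C[n:, :m][np.diag_indices(m)] = ins_cost    # insert a node of the target graph
        rows, cols = linear_sum_assignment(C)       # optimal bipartite assignment
        return C[rows, cols].sum()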
 

 
Author: Lluis Pere de las Heras; Oriol Ramos Terrades; Josep Llados
Title: Attributed Graph Grammar for floor plan analysis
Type: Conference Article
Year: 2015
Publication: 13th International Conference on Document Analysis and Recognition (ICDAR 2015)
Pages: 726-730
Abstract: In this paper, we propose the use of an Attributed Graph Grammar as a unique framework to model and recognize the structure of floor plans. This grammar represents a building as a hierarchical composition of structurally and semantically related elements, where common representations are learned stochastically from annotated data. Given an input image, parsing consists of constructing the graph representation that best agrees with the probabilistic model defined by the grammar. The proposed method provides several advantages with respect to traditional floor plan analysis techniques. It uses an unsupervised statistical approach for detecting walls that adapts to different graphical notations and relaxes strong structural assumptions such as straightness and orthogonality. Moreover, the independence between the knowledge model and the parsing implementation allows the method to learn different building configurations automatically and thus to cope with the existing variability. These advantages are clearly demonstrated by comparing it with the most recent floor plan interpretation techniques on 4 datasets of real floor plans with different notations.
Address: Nancy; France; August 2015
Conference: ICDAR
Notes: DAG; 600.077; 600.061
Approved: no
Call Number: Admin @ si @ HRL2015b
Serial: 2727