Author Shida Beigpour; Marc Serra; Joost Van de Weijer; Robert Benavente; Maria Vanrell; Olivier Penacchio; Dimitris Samaras
Title Intrinsic Image Evaluation On Synthetic Complex Scenes Type Conference Article
Year 2013 Publication 20th IEEE International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 285-289
Keywords
Abstract Scene decomposition into its illuminant, shading, and reflectance intrinsic images is an essential step for scene understanding. Collecting intrinsic image ground-truth data is a laborious task. The assumptions on which the ground-truth procedures are based limit their application to simple scenes with a single object taken in the absence of indirect lighting and interreflections. We investigate synthetic data for intrinsic image research since the extraction of ground truth is straightforward, and it allows for scenes in more realistic situations (e.g., multiple illuminants and interreflections). With this dataset we aim to motivate researchers to further explore intrinsic image decomposition in complex scenes.
Address Melbourne; Australia; September 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes CIC; 600.048; 600.052; 600.051 Approved no
Call Number Admin @ si @ BSW2013 Serial 2264
Permanent link to this record
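
A note on the decomposition itself: for synthetic scenes the intrinsic model I = R * S (image equals reflectance times shading, per pixel) is available by construction, which is why ground-truth extraction is straightforward. A minimal numpy sketch of that relation, with made-up arrays standing in for rendered ground-truth layers rather than the paper's dataset:

import numpy as np

# Hypothetical ground-truth layers a renderer can export directly: per-pixel
# reflectance (albedo) and grey-level shading for a tiny 2x2 RGB "scene".
reflectance = np.array([[[0.8, 0.2, 0.1], [0.3, 0.6, 0.9]],
                        [[0.5, 0.5, 0.5], [0.9, 0.7, 0.2]]])
shading = np.array([[[0.9], [0.4]],
                    [[0.7], [1.0]]])        # broadcast over the RGB channels

image = reflectance * shading               # rendered image: I = R * S

# With synthetic data the decomposition is exact, so reflectance is recovered
# up to numerical precision wherever shading is non-zero.
assert np.allclose(image / np.clip(shading, 1e-6, None), reflectance)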
 

 
Author Rahat Khan; Joost Van de Weijer; Fahad Shahbaz Khan; Damien Muselet; Christophe Ducottet; Cecile Barat
Title Discriminative Color Descriptors Type Conference Article
Year 2013 Publication IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 2866-2873
Keywords
Abstract Color description is a challenging task because of large variations in RGB values which occur due to scene accidental events, such as shadows, shading, specularities, illuminant color changes, and changes in viewing geometry. Traditionally, this challenge has been addressed by capturing the variations in physics-based models, and deriving invariants for the undesired variations. The drawback of this approach is that sets of distinguishable colors in the original color space are mapped to the same value in the photometric invariant space. This results in a drop of discriminative power of the color description. In this paper we take an information theoretic approach to color description. We cluster color values together based on their discriminative power in a classification problem. The clustering has the explicit objective to minimize the drop of mutual information of the final representation. We show that such a color description automatically learns a certain degree of photometric invariance. We also show that a universal color representation, which is based on other data sets than the one at hand, can obtain competitive performance. Experiments show that the proposed descriptor outperforms existing photometric invariants. Furthermore, we show that combined with shape description these color descriptors obtain excellent results on four challenging datasets, namely, PASCAL VOC 2007, Flowers-102, Stanford dogs-120 and Birds-200.
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN Medium
Area Expedition Conference CVPR
Notes CIC; 600.048 Approved no
Call Number Admin @ si @ KWK2013a Serial 2262
Permanent link to this record
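
To make the clustering idea in the abstract above concrete, here is a minimal sketch in the spirit of agglomerative information bottleneck: color bins are merged greedily so that each merge loses as little mutual information I(color; class) as possible. The joint count table is invented for illustration, and the greedy search is an assumption, not the authors' implementation.

import numpy as np

def mutual_information(joint):
    """I(C; Y) from a joint count table (rows: color bins, cols: classes)."""
    p = joint / joint.sum()
    pc = p.sum(axis=1, keepdims=True)      # marginal over color bins
    py = p.sum(axis=0, keepdims=True)      # marginal over classes
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (pc @ py)[nz])))

def merge_color_bins(joint, n_clusters):
    """Greedily merge color bins, minimizing the loss of I(color; class)."""
    clusters = [[i] for i in range(joint.shape[0])]
    table = joint.astype(float)
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                merged = np.delete(table, j, axis=0)
                merged[i] = table[i] + table[j]
                loss = mutual_information(table) - mutual_information(merged)
                if best is None or loss < best[0]:
                    best = (loss, i, j, merged)
        _, i, j, table = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Invented joint counts: 6 color bins observed across 3 object classes.
counts = np.array([[30,  2,  1],
                   [25,  4,  2],
                   [ 3, 40,  5],
                   [ 2, 35,  8],
                   [ 1,  3, 50],
                   [ 4,  2, 45]])
print(merge_color_bins(counts, n_clusters=3))   # bins with similar class distributions end up together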
 

 
Author Christophe Rigaud; Dimosthenis Karatzas; Joost Van de Weijer; Jean-Christophe Burie; Jean-Marc Ogier
Title Automatic text localisation in scanned comic books Type Conference Article
Year 2013 Publication Proceedings of the International Conference on Computer Vision Theory and Applications Abbreviated Journal
Volume Issue Pages 814-819
Keywords Text localization; comics; text/graphic separation; complex background; unstructured document
Abstract Comic books constitute an important cultural heritage asset in many countries. Digitization combined with subsequent document understanding enables direct content-based search, as opposed to metadata-only search (e.g. album title or author name). Few studies have been done in this direction. In this work we detail a novel approach for automatic text localization in scanned comic book pages, an essential step towards fully automatic comic book understanding. We focus on speech text as it is semantically important and represents the majority of the text present in comics. The approach is compared with existing text localization methods found in the literature and results are presented.
Address Barcelona; February 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISAPP
Notes DAG; CIC; 600.056 Approved no
Call Number Admin @ si @ RKW2013b Serial 2261
Permanent link to this record
 

 
Author Christophe Rigaud; Dimosthenis Karatzas; Joost Van de Weijer; Jean-Christophe Burie; Jean-Marc Ogier
Title An active contour model for speech balloon detection in comics Type Conference Article
Year 2013 Publication 12th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages 1240-1244
Keywords
Abstract Comic books constitute an important cultural heritage asset in many countries. Digitization combined with subsequent comic book understanding would enable a variety of new applications, including content-based retrieval and content retargeting. Document understanding in this domain is challenging, as comics are semi-structured documents that combine semantically important graphical and textual parts. Few studies have been done in this direction. In this work we detail a novel approach for closed and non-closed speech balloon localization in scanned comic book pages, an essential step towards fully automatic comic book understanding. The approach is compared with existing methods for closed balloon localization found in the literature and results are presented.
Address Washington; USA; August 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1520-5363 ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; CIC; 600.056 Approved no
Call Number Admin @ si @ RKW2013a Serial 2260
Permanent link to this record
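
As a concrete illustration of the active contour idea in the abstract above, the sketch below fits a parametric snake to a bright, roughly balloon-shaped region with scikit-image. The synthetic page, the circular initialisation and the energy weights are assumptions for illustration only; they are not the authors' balloon energy model.

import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Synthetic stand-in for a comic panel: a bright elliptical "balloon"
# on a darker background.
page = np.zeros((200, 200))
rows, cols = np.mgrid[0:200, 0:200]
page[((rows - 100) / 60.0) ** 2 + ((cols - 100) / 40.0) ** 2 < 1] = 1.0

# Initial contour: a circle loosely surrounding the candidate region
# (points given as (row, col), which is what recent scikit-image expects).
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 80 * np.sin(theta), 100 + 80 * np.cos(theta)])

snake = active_contour(gaussian(page, sigma=3, preserve_range=True),
                       init, alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)   # (200, 2): contour points pulled towards the balloon edge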
 

 
Author Laura Igual; Xavier Baro
Title Experiencia de aprendizaje de programación basada en proyectos. Simposio-Taller Estrategias y herramientas para el aprendizaje y la evaluación Type Miscellaneous
Year 2013 Publication Simposio-Taller Estrategias y herramientas para el aprendizaje y la evaluación, de las XIX Jornadas sobre la Enseñanza Universitaria de la Informática Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference JENUI
Notes OR;HuPBA;MV Approved no
Call Number Admin @ si @ IgB2013 Serial 2257
Permanent link to this record
 

 
Author S.Grau; Anna Puig; Sergio Escalera; Maria Salamo; Oscar Amoros
Title Efficient complementary viewpoint selection in volume rendering Type Conference Article
Year 2013 Publication 21st WSCG Conference on Computer Graphics Abbreviated Journal
Volume Issue Pages
Keywords Dual camera; Visualization; Interactive Interfaces; Dynamic Time Warping
Abstract A major goal of visualization is to appropriately express knowledge of scientific data. Generally, gathering the visual information contained in volume data requires a lot of expertise from the final user to set up the parameters of the visualization. One way of alleviating this problem is to provide the position of inner structures from different viewpoint locations to enhance the perception and construction of the mental image. To this end, traditional illustrations use two or three different views of the regions of interest. Similarly, with the aim of assisting users to easily place a good viewpoint location, this paper proposes an automatic and interactive method that locates different complementary viewpoints from a reference camera in volume datasets. Specifically, the proposed method combines the quantity of information each camera provides for each structure and the shape similarity of the projections of the remaining viewpoints, based on Dynamic Time Warping. The selected complementary viewpoints allow a better understanding of the focused structure in several applications. Thus, the user interactively receives feedback based on several viewpoints, which helps them understand the visual information. A live-user evaluation on different data sets shows good convergence to useful complementary viewpoints.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-808694374-9 Medium
Area Expedition Conference WSCG
Notes HuPBA; 600.046;MILAB Approved no
Call Number Admin @ si @ GPE2013a Serial 2255
Permanent link to this record
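
The shape-similarity term in the abstract above relies on Dynamic Time Warping. A compact reference implementation of the DTW distance between two 1-D sequences (e.g. radial contour signatures of the projected structures); the signatures below are invented for illustration.

import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],        # insertion
                                 cost[i, j - 1],        # deletion
                                 cost[i - 1, j - 1])    # match
    return cost[n, m]

# Invented radial contour signatures of one structure seen from two cameras.
sig_reference = np.array([0.2, 0.5, 0.9, 1.0, 0.7, 0.4, 0.2])
sig_candidate = np.array([0.2, 0.3, 0.6, 0.95, 1.0, 0.6, 0.3])
print(dtw_distance(sig_reference, sig_candidate))   # small value: similar shapes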
 

 
Author Vitaliy Konovalov; Albert Clapes; Sergio Escalera
Title Automatic Hand Detection in RGB-Depth Data Sequences Type Conference Article
Year 2013 Publication 16th Catalan Conference on Artificial Intelligence Abbreviated Journal
Volume Issue Pages 91-100
Keywords
Abstract Detecting hands in multi-modal RGB-Depth visual data has become a challenging Computer Vision problem with several applications of interest. This task involves dealing with changes in illumination, viewpoint variations, the articulated nature of the human body, the high flexibility of the wrist articulation, and the deformability of the hand itself. In this work, we propose an accurate and efficient automatic hand detection scheme to be applied in Human-Computer Interaction (HCI) applications in which the user is seated at the desk and, thus, only the upper body is visible. Our main hypothesis is that hand landmarks remain at a nearly constant geodesic distance from an automatically located anatomical reference point. In a given frame, the human body is first segmented in the depth image. Then, a graph representation of the body is built in which geodesic paths are computed from the reference point. The dense optical flow vectors on the corresponding RGB image are used to reduce ambiguities in the geodesic paths' connectivity, making it possible to eliminate false edges interconnecting different body parts. Finally, we are able to detect the position of both hands based on invariant geodesic distances and optical flow within the body region, without involving costly learning procedures.
Address Vic; October 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CCIA
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ KCE2013 Serial 2323
Permanent link to this record
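
The geodesic step in the abstract above can be prototyped directly with a sparse graph and Dijkstra. In the sketch below, valid depth pixels become nodes, 4-neighbours are linked only when their depth difference is small (so edges do not jump across depth discontinuities), and distances are propagated from a reference node. The toy depth map, the edge weights and the depth-jump threshold are assumptions, not the paper's parameters.

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_from_reference(depth, mask, ref, max_depth_jump=0.05):
    """Geodesic distance from ref over the segmented body in a depth map."""
    h, w = depth.shape
    idx = -np.ones((h, w), dtype=int)
    idx[mask] = np.arange(mask.sum())
    graph = lil_matrix((mask.sum(), mask.sum()))
    for y in range(h):                      # explicit loops for clarity only
        for x in range(w):
            if not mask[y, x]:
                continue
            for dy, dx in ((0, 1), (1, 0)):             # 4-connectivity
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and mask[ny, nx]:
                    jump = abs(depth[y, x] - depth[ny, nx])
                    if jump < max_depth_jump:           # no edges across depth gaps
                        graph[idx[y, x], idx[ny, nx]] = 0.01 + jump
    dist = dijkstra(graph.tocsr(), directed=False, indices=int(idx[ref]))
    out = np.full((h, w), np.inf)
    out[mask] = dist
    return out

# Toy 5x5 depth map (metres) with every pixel belonging to the "body".
depth = np.full((5, 5), 1.0)
mask = np.ones((5, 5), dtype=bool)
dmap = geodesic_from_reference(depth, mask, ref=(2, 2))
print(dmap[2, 2], dmap[0, 0])   # 0.0 at the reference, larger towards the corners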
 

 
Author Andreas Møgelmose; Chris Bahnsen; Thomas B. Moeslund; Albert Clapes; Sergio Escalera
Title Tri-modal Person Re-identification with RGB, Depth and Thermal Features Type Conference Article
Year 2013 Publication 9th IEEE Workshop on Perception Beyond the Visible Spectrum, Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 301-307
Keywords
Abstract Person re-identification is about recognizing people who have passed by a sensor earlier. Previous work is mainly based on RGB data, but in this work we for the first time present a system where we combine RGB, depth, and thermal data for re-identification purposes. First, from each of the three modalities, we obtain some particular features: from RGB data, we model color information from different regions of the body, from depth data, we compute different soft body biometrics, and from thermal data, we extract local structural information. Then, the three information types are combined in a joined classifier. The tri-modal system is evaluated on a new RGB-D-T dataset, showing successful results in re-identification scenarios.
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-0-7695-4990-3 Medium
Area Expedition Conference CVPRW
Notes HUPBA;MILAB Approved no
Call Number Admin @ si @ MBM2013 Serial 2253
Permanent link to this record
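
The abstract above combines per-modality features (RGB colour, depth-based soft biometrics, thermal local structure) in a joint classifier. A minimal fusion-by-concatenation sketch with scikit-learn; the random feature vectors stand in for the real descriptors, and using a plain logistic regression as the joint classifier is an assumption rather than the paper's exact combiner.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_people, n_samples = 5, 40

# Random stand-ins for the three per-modality descriptors of each detection.
rgb_feat = rng.normal(size=(n_samples, 48))       # e.g. body-region colour histograms
depth_feat = rng.normal(size=(n_samples, 8))      # e.g. soft body biometrics
thermal_feat = rng.normal(size=(n_samples, 32))   # e.g. local structure statistics
labels = rng.integers(0, n_people, size=n_samples)

# Joint representation: concatenate the modalities and train one classifier.
X = np.hstack([rgb_feat, depth_feat, thermal_feat])
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X, labels)

# Re-identifying a new detection = predicting which known person it belongs to.
query = np.hstack([rng.normal(size=48), rng.normal(size=8), rng.normal(size=32)])
print(clf.predict(query.reshape(1, -1)))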
 

 
Author Alicia Fornes; Xavier Otazu; Josep Llados
Title Show through cancellation and image enhancement by multiresolution contrast processing Type Conference Article
Year 2013 Publication 12th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages 200-204
Keywords
Abstract Historical documents suffer from different types of degradation and noise, such as background variation, uneven illumination or dark spots. In the case of double-sided documents, another common problem is that the back side of the document usually interferes with the front side because of the transparency of the document or ink bleeding. This effect is called the show-through phenomenon. Many methods have been developed to solve these problems; in the case of show-through, most do so by scanning and matching both the front and back sides of the document. In contrast, our approach is designed to use only one side of the scanned document. We hypothesize that show-through components have low contrast, while foreground components have high contrast. A Multiresolution Contrast (MC) decomposition is presented in order to estimate the contrast of features at different spatial scales. We cancel the show-through phenomenon by thresholding these low contrast components. This decomposition is also able to enhance the image, removing shadowed areas by weighting spatial scales. Results show that the enhanced images improve the readability of the documents, allowing scholars both to recover unreadable words and to solve ambiguities.
Address Washington; USA; August 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1520-5363 ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 602.006; 600.045; 600.061; 600.052;CIC Approved no
Call Number Admin @ si @ FOL2013 Serial 2241
Permanent link to this record
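
A rough sketch of the scale-selective contrast idea in the abstract above, using a simple Gaussian band-pass decomposition in place of the paper's Multiresolution Contrast decomposition (that substitution, the scales and the threshold are all assumptions): weak band components, which faint show-through tends to produce, are zeroed before the page is rebuilt.

import numpy as np
from scipy.ndimage import gaussian_filter

def suppress_low_contrast(image, sigmas=(1, 2, 4, 8), threshold=0.03):
    """Band-pass decomposition; zero weak components, then rebuild the page."""
    current = image.astype(float)
    bands = []
    for sigma in sigmas:
        smoothed = gaussian_filter(current, sigma)
        bands.append(current - smoothed)        # detail ("contrast") at this scale
        current = smoothed
    restored = current                          # coarse page background
    for band in bands:
        restored = restored + np.where(np.abs(band) > threshold, band, 0.0)
    return restored

# Toy grey-level page: deep foreground ink plus a faint, thin show-through stroke.
page = np.ones((64, 64))
page[20:44, 30:34] -= 0.60      # foreground ink (high contrast)
page[10:54, 8:10] -= 0.04       # faint verso stroke (low contrast)
restored = suppress_low_contrast(page)
print("show-through depth:", round(1 - page[30, 8], 3), "->", round(1 - restored[30, 8], 3))
print("ink stroke depth:  ", round(1 - page[30, 31], 3), "->", round(1 - restored[30, 31], 3))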
 

 
Author Santiago Segui; Michal Drozdzal; Ekaterina Zaytseva; Carolina Malagelada; Fernando Azpiroz; Petia Radeva; Jordi Vitria
Title A new image centrality descriptor for wrinkle frame detection in WCE videos Type Conference Article
Year 2013 Publication 13th IAPR Conference on Machine Vision Applications Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Small bowel motility dysfunctions are a widespread functional disorder characterized by abdominal pain and altered bowel habits in the absence of specific and unique organic pathology. Current methods of diagnosis are complex and can only be conducted at some highly specialized referral centers. Wireless Video Capsule Endoscopy (WCE) could be an interesting diagnostic alternative that presents excellent clinical advantages, since it is non-invasive and can be conducted by non-specialists. The purpose of this work is to present a new method for the detection of wrinkle frames in WCE, a critical characteristic for detecting one of the main motility events: contractions. The method goes beyond the use of one of the classical image features, the Histogram
Address Kyoto; Japan; May 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MVA
Notes OR; MILAB; 600.046;MV Approved no
Call Number Admin @ si @ SDZ2013 Serial 2239
Permanent link to this record
 

 
Author Xavier Baro; David Masip; Elena Planas; Julia Minguillon
Title PeLP: Plataforma para el Aprendizaje de Lenguajes de Programación Type Miscellaneous
Year 2013 Publication XV Jornadas de Enseñanza Universitaria de la Informática Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference JENUI
Notes OR;HuPBA;MV Approved no
Call Number Admin @ si @ BMP2013 Serial 2237
Permanent link to this record
 

 
Author Victor Borjas; Jordi Vitria; Petia Radeva
Title Gradient Histogram Background Modeling for People Detection in Stationary Camera Environments Type Conference Article
Year 2013 Publication 13th IAPR Conference on Machine Vision Applications Abbreviated Journal
Volume Issue Pages
Keywords
Abstract (Best Poster Award) One of the big challenges of today's person detectors is decreasing the false positive rate. In this paper, we propose a novel framework to customize person detectors in static camera scenarios in order to reduce this rate. This scheme includes background modeling for subtraction based on gradient histograms and Mean-Shift clustering. Our experiments show that the detection improved compared to using only the output from the pedestrian detector, reducing false positives by 87% and therefore significantly increasing the overall precision of the detection.
Address Kyoto; Japan; May 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MVA
Notes OR; MILAB;MV Approved no
Call Number BVR2013 Serial 2238
Permanent link to this record
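
A minimal sketch of the kind of gradient-histogram background model described in the abstract above: each cell keeps a running-average orientation histogram of the empty scene, and cells whose current histogram deviates from it are treated as candidate foreground (and could then be used to veto detections on static structure). The cell size, the chi-square-style distance, the update rate and the threshold are assumptions, not the paper's settings.

import numpy as np

def cell_gradient_histograms(gray, cell=16, bins=9):
    """Orientation histograms of image gradients over non-overlapping cells."""
    gy, gx = np.gradient(gray.astype(float))
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx) % np.pi
    h, w = gray.shape
    hists = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hist, _ = np.histogram(ang[sl], bins=bins, range=(0, np.pi),
                                   weights=mag[sl])
            hists[i, j] = hist / (hist.sum() + 1e-9)
    return hists

class GradientBackgroundModel:
    """Running-average background model of per-cell gradient histograms."""
    def __init__(self, alpha=0.05, threshold=0.5):
        self.alpha, self.threshold, self.model = alpha, threshold, None

    def update_and_segment(self, gray):
        hists = cell_gradient_histograms(gray)
        if self.model is None:
            self.model = hists.copy()
        # chi-square-style distance between current and background histograms
        dist = 0.5 * np.sum((hists - self.model) ** 2 /
                            (hists + self.model + 1e-9), axis=2)
        foreground_cells = dist > self.threshold
        self.model = (1 - self.alpha) * self.model + self.alpha * hists
        return foreground_cells

# Toy sequence: a textured static background, then a bright square "person".
rng = np.random.default_rng(1)
background = rng.normal(0.5, 0.05, size=(128, 128))
model = GradientBackgroundModel()
for _ in range(10):                                  # learn the empty scene
    model.update_and_segment(background)
frame = background.copy()
frame[40:90, 40:70] = 1.0                            # new object, new local gradients
print(int(model.update_and_segment(frame).sum()), "cells flagged as foreground")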
 

 
Author Fadi Dornaika; Bogdan Raducanu
Title Out-of-Sample Embedding for Manifold Learning Applied to Face Recognition Type Conference Article
Year 2013 Publication IEEE International Workshop on Analysis and Modeling of Faces and Gestures Abbreviated Journal
Volume Issue Pages 862-868
Keywords
Abstract Manifold learning techniques are affected by two critical aspects: (i) the design of the adjacency graphs, and (ii) the embedding of new test data (the out-of-sample problem). For the first aspect, the proposed schemes have been heuristically driven. For the second aspect, the difficulty resides in finding an accurate mapping that transfers unseen data samples into an existing manifold. Past works addressing these two aspects were heavily parametric, in the sense that the optimal performance is only reached for a suitable parameter choice that should be known in advance. In this paper, we demonstrate that sparse coding theory not only serves for automatic graph reconstruction, as shown in recent works, but also represents an accurate alternative for out-of-sample embedding. Taking Laplacian Eigenmaps as a case study, we applied our method to the face recognition problem. To evaluate the effectiveness of the proposed out-of-sample embedding, experiments are conducted using the k-nearest neighbor (KNN) and Kernel Support Vector Machine (KSVM) classifiers on four public face databases. The experimental results show that the proposed model is able to achieve high categorization effectiveness as well as high consistency with non-linear embeddings/manifolds obtained in batch mode.
Address Portland; USA; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes OR; 600.046;MV Approved no
Call Number Admin @ si @ DoR2013 Serial 2236
Permanent link to this record
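
The out-of-sample mechanism described in the abstract above can be sketched in a few lines: a new sample is sparse-coded over the training samples, and the same weights are applied to their existing embedding coordinates. Below, scikit-learn's SpectralEmbedding stands in for the batch Laplacian Eigenmaps and a Lasso solves the sparse coding; the digits data, the sparsity level and the weight normalisation are assumptions, not the authors' setup.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import Lasso
from sklearn.manifold import SpectralEmbedding

# Batch (training-time) non-linear embedding: Laplacian Eigenmaps.
X = load_digits().data[:300]                          # 300 samples, 64-D
Y = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(X)

def out_of_sample_embed(x_new, X_train, Y_train, alpha=0.01):
    """Sparse-code x_new over the training set, reuse the weights in the embedding."""
    coder = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=10000)
    coder.fit(X_train.T, x_new)             # dictionary atoms = training samples
    w = coder.coef_ / (coder.coef_.sum() + 1e-12)
    return Y_train.T @ w                    # same weights applied to the embedded points

x_new = X[0] + 0.01                         # a slightly perturbed known digit
print(out_of_sample_embed(x_new, X, Y))     # should stay close to the batch coordinates:
print(Y[0])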
 

 
Author German Ros; J. Guerrero; Angel Sappa; Antonio Lopez
Title VSLAM pose initialization via Lie groups and Lie algebras optimization Type Conference Article
Year 2013 Publication Proceedings of IEEE International Conference on Robotics and Automation Abbreviated Journal
Volume Issue Pages 5740-5747
Keywords SLAM
Abstract We present a novel technique for estimating initial 3D poses in the context of localization and Visual SLAM problems. The presented approach can deal with noise, outliers and a large amount of input data and still performs in real time on a standard CPU. Our method produces solutions with an accuracy comparable to that of RANSAC but can be much faster when the percentage of outliers is high or for large amounts of input data. In the current work we propose to formulate pose estimation as an optimization problem on Lie groups, considering their manifold structure as well as their associated Lie algebras. This allows us to perform a fast and simple optimization while preserving all the constraints imposed by the Lie group SE(3). Additionally, we present several key design concepts related to the cost function and its Jacobian, aspects that are critical for the good performance of the algorithm.
Address Karlsruhe; Germany; May 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1050-4729 ISBN 978-1-4673-5641-1 Medium
Area Expedition Conference ICRA
Notes ADAS; 600.054; 600.055; 600.057 Approved no
Call Number Admin @ si @ RGS2013a; ADAS @ adas @ Serial 2225
Permanent link to this record
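
The optimization pattern described in the abstract above, updating in the Lie algebra se(3) and mapping back to SE(3) with the exponential map, can be sketched on a toy 3-D point alignment problem. The plain Gauss-Newton loop and least-squares cost below are assumptions for illustration; the paper's cost function and Jacobian design are more elaborate.

import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]], [w[2], 0.0, -w[0]], [-w[1], w[0], 0.0]])

def se3_exp(xi):
    """Exponential map: 6-vector (rho, phi) in se(3) -> 4x4 matrix in SE(3)."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    W = hat(phi)
    if theta < 1e-9:
        R, V = np.eye(3) + W, np.eye(3) + 0.5 * W
    else:
        A = np.sin(theta) / theta
        B = (1 - np.cos(theta)) / theta ** 2
        C = (1 - A) / theta ** 2
        R = np.eye(3) + A * W + B * W @ W
        V = np.eye(3) + B * W + C * W @ W
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ rho
    return T

def estimate_pose(P, Q, iters=20):
    """Gauss-Newton on se(3): find T minimising sum ||T * P_i - Q_i||^2."""
    T = np.eye(4)
    for _ in range(iters):
        Pt = (T[:3, :3] @ P.T).T + T[:3, 3]
        r = (Q - Pt).reshape(-1)                         # stacked residuals
        J = np.zeros((3 * len(P), 6))
        for i, p in enumerate(Pt):
            J[3 * i:3 * i + 3, :3] = np.eye(3)           # translation part
            J[3 * i:3 * i + 3, 3:] = -hat(p)             # rotation part (left perturbation)
        xi = np.linalg.lstsq(J, r, rcond=None)[0]        # increment in the Lie algebra
        T = se3_exp(xi) @ T                              # update on the manifold
    return T

# Toy data: random 3-D points and a known ground-truth motion to recover.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
T_true = se3_exp(np.array([0.2, -0.1, 0.3, 0.05, 0.1, -0.08]))
Q = (T_true[:3, :3] @ P.T).T + T_true[:3, 3]
print(np.allclose(estimate_pose(P, Q), T_true, atol=1e-6))   # True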
 

 
Author David Aldavert; Marçal Rusiñol; Ricardo Toledo; Josep Llados
Title Integrating Visual and Textual Cues for Query-by-String Word Spotting Type Conference Article
Year 2013 Publication 12th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages 511-515
Keywords
Abstract In this paper, we present a word spotting framework that follows the query-by-string paradigm, where word images are represented by both textual and visual representations. The textual representation is formulated in terms of character n-grams, while the visual one is based on the bag-of-visual-words scheme. These two representations are merged together and projected to a sub-vector space. This transform makes it possible, given a textual query, to retrieve word instances that were only represented by the visual modality. Moreover, this statistical representation can be used together with state-of-the-art indexation structures in order to deal with large-scale scenarios. The proposed method is evaluated on a collection of historical documents, outperforming state-of-the-art performance.
Address Washington; USA; August 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1520-5363 ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; ADAS; 600.045; 600.055; 600.061 Approved no
Call Number Admin @ si @ ART2013 Serial 2224
Permanent link to this record
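
The fusion scheme in the abstract above can be sketched end to end: character n-grams and bag-of-visual-words histograms are concatenated, a joint sub-space is learned, and a purely textual query (visual block set to zero) is projected into that sub-space for retrieval. In the sketch below the visual histograms are simulated and TruncatedSVD plays the role of the joint projection; both are assumptions rather than the authors' pipeline.

import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

words = ["london", "londres", "paris", "madrid", "market", "marked"]
rng = np.random.default_rng(0)

# Textual representation: character 2- and 3-grams of each transcription.
vectorizer = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
T = vectorizer.fit_transform(words).toarray().astype(float)

# Visual representation: bag-of-visual-words histograms of the word images.
# Here they are simulated as a noisy projection of the word appearance, so that
# visually similar words get similar histograms, as they would in practice.
V = T @ rng.random((T.shape[1], 50)) + 0.1 * rng.random((len(words), 50))

# Joint sub-space learned over the concatenated [textual | visual] vectors.
joint = np.hstack([T, V])
svd = TruncatedSVD(n_components=4, random_state=0).fit(joint)
index = svd.transform(joint)                       # indexed word images

def query_by_string(query):
    """Project a text-only query (visual block set to zero) and rank the index."""
    t = vectorizer.transform([query]).toarray().astype(float)
    q = np.hstack([t, np.zeros((1, V.shape[1]))])
    scores = cosine_similarity(svd.transform(q), index)[0]
    return [words[i] for i in np.argsort(-scores)]

print(query_by_string("london"))   # words sharing n-grams with the query are
                                   # expected to rank first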