Records
Author Marçal Rusiñol; Lluis Pere de las Heras; Joan Mas; Oriol Ramos Terrades; Dimosthenis Karatzas; Anjan Dutta; Gemma Sanchez; Josep Llados
Title CVC-UAB's participation in the Flowchart Recognition Task of CLEF-IP 2012 Type Conference Article
Year 2012 Publication Conference and Labs of the Evaluation Forum Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Roma
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CLEF
Notes DAG Approved no
Call Number Admin @ si @ RHM2012 Serial 2072
Permanent link to this record
 

 
Author Miguel Angel Bautista; Antonio Hernandez; Victor Ponce; Xavier Perez Sala; Xavier Baro; Oriol Pujol; Cecilio Angulo; Sergio Escalera
Title Probability-based Dynamic Time Warping for Gesture Recognition on RGB-D data Type Conference Article
Year 2012 Publication 21st International Conference on Pattern Recognition, International Workshop on Depth Image Analysis Abbreviated Journal
Volume 7854 Issue Pages 126-135
Keywords
Abstract Dynamic Time Warping (DTW) is commonly used in gesture recognition tasks to tackle the temporal length variability of gestures. In the DTW framework, a set of gesture patterns is compared, one by one, to a possibly unbounded test sequence, and a query gesture category is recognized if a warping cost below a certain threshold is found within the test sequence. However, taking either a single sample per gesture category or a set of isolated samples may not encode the variability of that gesture category. In this paper, a probability-based DTW for gesture recognition is proposed. Different samples of the same gesture pattern obtained from RGB-Depth data are used to build a Gaussian-based probabilistic model of the gesture, and the DTW cost is adapted to this new model. The proposed approach is tested in a challenging scenario, showing better performance of the probability-based DTW in comparison to state-of-the-art approaches for gesture recognition on RGB-D data. (A minimal code sketch of the probability-based cost follows this record.)
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-40302-6 Medium
Area Expedition Conference WDIA
Notes MILAB; OR;HuPBA;MV Approved no
Call Number Admin @ si @ BHP2012 Serial 2120
Permanent link to this record
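As an illustration of the probability-based DTW cost described in the record above, here is a minimal Python sketch. It assumes the training samples of a gesture have already been resampled to a common length, and it uses a per-frame Gaussian with diagonal covariance as a stand-in for the paper's probabilistic model; the function names and the detection logic are illustrative, not the authors' implementation.

```python
# Minimal sketch of a probability-based DTW cost (illustrative only; the
# per-frame Gaussian model is an assumption standing in for the paper's model).
import numpy as np

def gaussian_model(samples):
    """samples: list of equal-length gesture sequences, each of shape (T, D)."""
    stack = np.stack(samples)                 # (N, T, D)
    mean = stack.mean(axis=0)                 # (T, D)
    std = stack.std(axis=0) + 1e-6            # (T, D), avoid zero variance
    return mean, std

def prob_cost(frame, mu_t, sigma_t):
    """Cost of matching a test frame to model frame t: 1 - Gaussian likelihood,
    used instead of the usual Euclidean local distance."""
    z = (frame - mu_t) / sigma_t
    return 1.0 - np.exp(-0.5 * np.sum(z ** 2))

def dtw_detect(test, mean, std):
    """Subsequence DTW of the Gaussian gesture model against a long test
    sequence; returns the minimal warping cost found along the sequence."""
    T, L = mean.shape[0], test.shape[0]
    D = np.full((T + 1, L + 1), np.inf)
    D[0, :] = 0.0                             # the gesture may start anywhere
    for i in range(1, T + 1):
        for j in range(1, L + 1):
            c = prob_cost(test[j - 1], mean[i - 1], std[i - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[T, 1:].min()                     # below a threshold => detection
```

A detection is declared wherever `dtw_detect` returns a cost below a chosen threshold; the threshold itself is application-dependent and not taken from the paper.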
 

 
Author Miguel Reyes; Albert Clapes; Luis Felipe Mejia; Jose Ramirez; Juan R Revilla; Sergio Escalera
Title Posture Analysis and Range of Movement Estimation using Depth Maps Type Conference Article
Year 2012 Publication 21st International Conference on Pattern Recognition, International Workshop on Depth Image Analysis Abbreviated Journal
Volume 7854 Issue Pages 97-105
Keywords
Abstract The World Health Organization estimates that 80% of the world population is affected by back pain at some point in their lives. Current practices to analyze back problems are expensive, subjective, and invasive. In this work, we propose a novel tool for posture and range of movement estimation based on the analysis of 3D information from depth maps. Given a set of keypoints defined by the user, RGB and depth data are aligned, the depth surface is reconstructed, keypoints are matched using a novel point-to-point fitting procedure, and accurate measurements of posture, spinal curvature, and range of movement are computed. The system provides precise and reliable measurements, making it useful for posture reeducation to prevent musculoskeletal disorders, such as back pain, as well as for tracking the posture evolution of patients in rehabilitation treatments. (A minimal code sketch of keypoint-based measurements follows this record.)
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-40302-6 Medium
Area Expedition Conference WDIA
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ RCM2012 Serial 2121
Permanent link to this record
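As a rough illustration of the kind of measurements described in the record above, the following sketch computes a joint angle and a simple spinal-curvature proxy from 3D keypoints assumed to have been extracted from an aligned RGB-D frame. The keypoint names and the curvature measure are illustrative assumptions, not the paper's point-to-point fitting procedure.

```python
# Minimal sketch: angles and a curvature proxy from user-defined 3D keypoints.
import numpy as np

def angle(a, b, c):
    """Angle at joint b (degrees) formed by the 3D points a-b-c."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def spinal_curvature(spine_points):
    """Crude curvature proxy: maximum deviation of the spine keypoints from
    the straight line joining the first and last point (same units as input)."""
    p0, p1 = spine_points[0], spine_points[-1]
    d = (p1 - p0) / np.linalg.norm(p1 - p0)
    dev = [np.linalg.norm((p - p0) - np.dot(p - p0, d) * d) for p in spine_points]
    return max(dev)

# Example: shoulder range of movement from three depth-derived keypoints (metres).
hip = np.array([0.0, 0.0, 2.0])
shoulder = np.array([0.0, 0.5, 2.0])
elbow = np.array([0.3, 0.5, 2.0])
print(angle(hip, shoulder, elbow))   # ~90 degrees
```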
 

 
Author Antonio Hernandez; Miguel Angel Bautista; Xavier Perez Sala; Victor Ponce; Xavier Baro; Oriol Pujol; Cecilio Angulo; Sergio Escalera
Title BoVDW: Bag-of-Visual-and-Depth-Words for Gesture Recognition Type Conference Article
Year 2012 Publication 21st International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We present a Bag-of-Visual-and-Depth-Words (BoVDW) model for gesture recognition, an extension of the Bag-of-Visual-Words (BoVW) model that benefits from the multimodal fusion of visual and depth features. State-of-the-art RGB and depth features, including a newly proposed depth descriptor, are analysed and combined in a late fusion fashion. The method is integrated in a continuous gesture recognition pipeline, where the Dynamic Time Warping (DTW) algorithm is used to perform a prior segmentation of gestures. Results on public data sets, within our gesture recognition pipeline, show better performance in comparison to a standard BoVW model. (A minimal code sketch of the bag-of-words and late-fusion steps follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1051-4651 ISBN 978-1-4673-2216-4 Medium
Area Expedition Conference ICPR
Notes HuPBA;MV Approved no
Call Number Admin @ si @ HBP2012 Serial 2122
Permanent link to this record
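The sketch below illustrates the bag-of-words side of the BoVDW record above: separate codebooks for RGB and depth descriptors, per-gesture histograms, and a simple late fusion of per-modality similarity scores. The codebooks are assumed to be precomputed (e.g. by k-means), and the histogram-intersection scoring with weight `beta` is an assumption; the paper's actual descriptors and fusion weights are not reproduced here.

```python
# Minimal sketch of bag-of-words histograms per modality plus late fusion.
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor (rows of `descriptors`) to its nearest
    codeword and return an L1-normalised histogram of codeword counts."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

def late_fusion_score(rgb_hist, depth_hist, rgb_ref, depth_ref, beta=0.5):
    """Late fusion: weighted combination of per-modality
    histogram-intersection similarities against a reference gesture."""
    s_rgb = np.minimum(rgb_hist, rgb_ref).sum()
    s_depth = np.minimum(depth_hist, depth_ref).sum()
    return beta * s_rgb + (1.0 - beta) * s_depth
```

In the paper's pipeline the DTW-based segmentation runs first; here the histograms of a segmented gesture are simply compared against reference histograms of known gesture classes.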
 

 
Author Anjan Dutta; Jaume Gibert; Josep Llados; Horst Bunke; Umapada Pal
Title Combination of Product Graph and Random Walk Kernel for Symbol Spotting in Graphical Documents Type Conference Article
Year 2012 Publication 21st International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 1663-1666
Keywords
Abstract This paper explores the use of the product graph for spotting symbols in graphical documents. The product graph is used to find the candidate subgraphs or components of the input graph that contain paths similar to those of the query graph. The acute angle between two edges and their length ratio are used as node labels. In a second step, each candidate subgraph of the input graph is assigned a distance measure computed with a random walk kernel, namely the minimum of the distances from the component to all components of the model graph. This distance measure is then used to eliminate dissimilar components. The remaining neighboring components are grouped, and the grouped zone is considered a retrieval zone of a symbol similar to the queried one. The entire method works online, i.e., it does not need any preprocessing step. The present paper reports the initial results of the method, which are very encouraging. (A minimal code sketch of a product graph and random walk kernel follows this record.)
Address Tsukuba, Japan
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1051-4651 ISBN 978-1-4673-2216-4 Medium
Area Expedition Conference ICPR
Notes DAG Approved no
Call Number Admin @ si @ DGL2012 Serial 2125
Permanent link to this record
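To make the product-graph and random-walk idea in the record above concrete, here is a generic sketch: it builds the direct (tensor) product of two node-labelled graphs, keeping only label-compatible node pairs, and evaluates a geometric random-walk kernel on it. The categorical node labels stand in for the paper's angle and length-ratio labels, and the decay parameter `lam` is an assumption.

```python
# Minimal sketch of a direct product graph and a geometric random-walk kernel.
import numpy as np

def product_adjacency(A1, labels1, A2, labels2):
    """Adjacency matrix of the tensor (direct) product graph, restricted to
    node pairs whose labels match."""
    n1, n2 = len(labels1), len(labels2)
    keep = np.array([[labels1[i] == labels2[j] for j in range(n2)]
                     for i in range(n1)]).reshape(-1)
    Ax = np.kron(A1, A2)            # product-graph adjacency over all pairs
    return Ax[keep][:, keep]        # drop nodes with incompatible labels

def random_walk_kernel(Ax, lam=0.05):
    """Geometric random-walk kernel: sum of the entries of (I - lam*Ax)^-1.
    lam must be below 1 / spectral radius of Ax for the series to converge."""
    n = Ax.shape[0]
    return np.linalg.inv(np.eye(n) - lam * Ax).sum()
```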
 

 
Author Klaus Broelemann; Anjan Dutta; Xiaoyi Jiang; Josep Llados
Title Hierarchical graph representation for symbol spotting in graphical document images Type Conference Article
Year 2012 Publication Structural, Syntactic, and Statistical Pattern Recognition, Joint IAPR International Workshop Abbreviated Journal
Volume 7626 Issue Pages 529-538
Keywords
Abstract Symbol spotting can be defined as locating a given query symbol in a large collection of graphical documents. In this paper we present a hierarchical graph representation for symbols. This representation allows graph matching methods to deal with low-level vectorization errors and thus to perform robust symbol spotting. To show the potential of this approach, we conduct an experiment on the SESYD dataset.
Address Miyajima-Itsukushima, Hiroshima
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-34165-6 Medium
Area Expedition Conference SSPR&SPR
Notes DAG Approved no
Call Number Admin @ si @ BDJ2012 Serial 2126
Permanent link to this record
 

 
Author Javier Vazquez; Robert Benavente; Maria Vanrell
Title Naming constraints constancy Type Conference Article
Year 2012 Publication 2nd Joint AVA / BMVA Meeting on Biological and Machine Vision Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Different studies have shown that languages from industrialized cultures share a set of 11 basic colour terms: red, green, blue, yellow, pink, purple, brown, orange, black, white, and grey (Berlin & Kay, 1969, Basic Color Terms, University of California Press; Kay & Regier, 2003, PNAS, 100, 9085-9089). Some of these studies have also reported the best representatives or focal values of each colour (Boynton and Olson, 1990, Vision Res., 30, 1311–1317; Sturges and Whitfield, 1995, CRA, 20:6, 364–376). Further studies have provided fuzzy datasets for colour naming by asking human observers to rate colours in terms of membership values (Benavente et al., 2006, CRA, 31:1, 48–56). Recently, a computational model based on these human ratings has been developed (Benavente et al., 2008, JOSA-A, 25:10, 2582-2593). This computational model follows a fuzzy approach to assign a colour name to a particular RGB value. For example, a pixel with value (255, 0, 0) will be named 'red' with membership 1, while a cyan pixel with an RGB value of (0, 200, 200) will be considered 0.5 green and 0.5 blue. In this work, we show how this colour naming paradigm can be applied to different computer vision tasks. In particular, we report results in colour constancy (Vazquez-Corral et al., 2012, IEEE TIP, in press) showing that the classical constraints on either illumination or surface reflectance can be substituted by the statistical properties encoded in the colour names. [Supported by projects TIN2010-21771-C02-1, CSD2007-00018.] (A minimal code sketch of fuzzy colour naming follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference AVA
Notes CIC Approved no
Call Number Admin @ si @ VBV2012 Serial 2131
Permanent link to this record
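The toy sketch below shows the "RGB in, membership vector out" interface of a fuzzy colour-naming model like the one cited in the record above. The focal colours and the Gaussian membership function are illustrative assumptions; the actual parametric fuzzy sets of Benavente et al. are not reproduced here.

```python
# Toy sketch of fuzzy colour naming over the 11 basic colour terms.
import numpy as np

FOCALS = {                       # illustrative focal colours (not calibrated)
    'red': (255, 0, 0), 'green': (0, 128, 0), 'blue': (0, 0, 255),
    'yellow': (255, 255, 0), 'pink': (255, 105, 180), 'purple': (128, 0, 128),
    'brown': (139, 69, 19), 'orange': (255, 165, 0), 'black': (0, 0, 0),
    'white': (255, 255, 255), 'grey': (128, 128, 128),
}

def colour_memberships(rgb, sigma=60.0):
    """Return a dict of membership values (summing to 1) for one RGB triplet."""
    rgb = np.asarray(rgb, dtype=float)
    w = {name: np.exp(-np.sum((rgb - np.asarray(f)) ** 2) / (2 * sigma ** 2))
         for name, f in FOCALS.items()}
    total = sum(w.values())
    return {name: v / total for name, v in w.items()}

m = colour_memberships((255, 0, 0))
print(max(m, key=m.get))   # 'red'
```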
 

 
Author Xavier Otazu; Olivier Penacchio; Laura Dempere-Marco
Title An investigation into plausible neural mechanisms related to the CIWaM computational model for brightness induction Type Conference Article
Year 2012 Publication 2nd Joint AVA / BMVA Meeting on Biological and Machine Vision Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. From a purely computational perspective, we built a low-level computational model (CIWaM) of early sensory processing based on multi-resolution wavelets, with the aim of replicating brightness and colour induction effects (Otazu et al., 2010, Journal of Vision, 10(12):5). Furthermore, we successfully used the CIWaM architecture to define a computational saliency model (Murray et al., 2011, CVPR, 433-440; Vanrell et al., submitted to AVA/BMVA'12). From a biological perspective, neurophysiological evidence suggests that perceived brightness information may be explicitly represented in V1. In this work we investigate possible neural mechanisms that offer a plausible explanation for such effects. To this end, we consider the model by Z. Li (Li, 1999, Network: Comput. Neural Syst., 10, 187-212), which is based on biological data and focuses on the part of V1 responsible for contextual influences, namely layer 2-3 pyramidal cells, interneurons, and horizontal intracortical connections. This model has been shown to account for phenomena such as visual saliency, which share with brightness induction the relevant effect of contextual influences (the ones modelled by CIWaM). In the proposed model, the input to the network is derived from a complete multiscale and multiorientation wavelet decomposition taken from the computational model (CIWaM).
This model successfully accounts for well-known psychophysical effects (among them the White's and modified White's effects, the Todorovic, Chevreul, and achromatic ring patterns, and grating induction effects) for static contexts, and also for brightness induction in dynamic contexts defined by modulating the luminance of surrounding areas. From a methodological point of view, we conclude that the results obtained by the computational model (CIWaM) are compatible with those obtained by the neurodynamical model proposed here. (A minimal code sketch of a multiscale, multiorientation wavelet decomposition follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference AVA
Notes CIC Approved no
Call Number Admin @ si @ OPD2012a Serial 2132
Permanent link to this record
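As a generic illustration of the multiscale, multiorientation wavelet decomposition mentioned in the record above, the sketch below uses PyWavelets. The wavelet ('db2') and the number of levels are assumptions; CIWaM defines its own decomposition, which is not reproduced here.

```python
# Minimal sketch of a multiscale, multiorientation wavelet decomposition.
import numpy as np
import pywt  # PyWavelets

image = np.random.rand(256, 256)                 # placeholder grey-level image
coeffs = pywt.wavedec2(image, wavelet='db2', level=4)

approx = coeffs[0]                               # coarsest low-pass residual
for level, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    # Each level (coarse to fine) provides three orientation channels:
    # horizontal, vertical and diagonal detail coefficients.
    print(level, cH.shape, cV.shape, cD.shape)
```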
 

 
Author Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades
Title Text/graphic separation using a sparse representation with multi-learned dictionaries Type Conference Article
Year 2012 Publication 21st International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages
Keywords Graphics Recognition; Layout Analysis; Document Understanding
Abstract In this paper, we propose a new approach to extract text regions from graphical documents. In our method, we first empirically construct two sequences of learned dictionaries for the text and graphical parts, respectively. Then, we compute the sparse representations of all non-overlapping document patches of different sizes in these learned dictionaries. Based on these representations, each patch can be classified into the text or graphic category by comparing its reconstruction errors. Same-sized patches in one category are then merged together to define the corresponding text or graphic layers, which are combined to create the final text/graphic layer. Finally, in a post-processing step, text regions are further filtered by using some learned thresholds. (A minimal code sketch of the reconstruction-error classification follows this record.)
Address Tsukuba
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICPR
Notes DAG Approved no
Call Number Admin @ si @ DTR2012a Serial 2135
Permanent link to this record
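The sketch below illustrates the classification step described in the record above: each patch is sparse-coded in a text dictionary and a graphics dictionary and labelled according to the smaller reconstruction error. The dictionaries are assumed to come from a prior dictionary-learning stage (not shown), and OMP with a fixed sparsity level is an assumption rather than the authors' exact coding scheme.

```python
# Minimal sketch: classify patches by sparse-coding reconstruction error.
import numpy as np
from sklearn.decomposition import sparse_encode

def classify_patches(patches, D_text, D_graph, n_nonzero=5):
    """patches: (n_patches, patch_dim); dictionaries: (n_atoms, patch_dim).
    Returns a boolean array, True where a patch is classified as text."""
    code_t = sparse_encode(patches, D_text, algorithm='omp',
                           n_nonzero_coefs=n_nonzero)
    code_g = sparse_encode(patches, D_graph, algorithm='omp',
                           n_nonzero_coefs=n_nonzero)
    err_t = np.linalg.norm(patches - code_t @ D_text, axis=1)
    err_g = np.linalg.norm(patches - code_g @ D_graph, axis=1)
    return err_t < err_g
```

Patches labelled as text would then be merged into layers and post-filtered with learned thresholds, as described in the abstract.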
 

 
Author Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades
Title Noise suppression over bi-level graphical documents using a sparse representation Type Conference Article
Year 2012 Publication Colloque International Francophone sur l'Écrit et le Document Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Bordeaux
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CIFED
Notes DAG Approved no
Call Number Admin @ si @ DTR2012b Serial 2136
Permanent link to this record
 

 
Author Adriana Romero; Simeon Petkov; Carlo Gatta; M.Sabate; Petia Radeva
Title Efficient automatic segmentation of vessels Type Conference Article
Year 2012 Publication 16th Conference on Medical Image Understanding and Analysis Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Swansea, United Kingdom
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MIUA
Notes MILAB Approved no
Call Number Admin @ si @ Serial 2137
Permanent link to this record
 

 
Author David Masip; Alexander Todorov; Jordi Vitria
Title The Role of Facial Regions in Evaluating Social Dimensions Type Conference Article
Year 2012 Publication 12th European Conference on Computer Vision – Workshops and Demonstrations Abbreviated Journal
Volume 7584 Issue II Pages 210-219
Keywords Workshops and Demonstrations
Abstract Facial trait judgments are an important information cue for people. Recent works in the field of Psychology have established the basis of face evaluation, defining a set of traits that we evaluate from faces (e.g. dominance, trustworthiness, aggressiveness, attractiveness, threatening, or intelligence, among others). We rapidly infer information from others' faces: usually after a short period of time (< 1000 ms) we perceive a certain degree of dominance or trustworthiness of another person from the face. Although these perceptions are not necessarily accurate, they influence many important social outcomes (such as the results of elections or court decisions). This topic has also attracted the attention of Computer Vision scientists, and recently a computational model to automatically predict trait evaluations from faces has been proposed. These systems try to mimic human perception by applying machine learning classifiers to a set of labeled data. In this paper we perform an experimental study of the specific facial features that trigger the social inferences. Using previous results from the literature, we propose to use simple similarity maps to evaluate which regions of the face most influence the trait inferences. The correlation analysis is performed using only appearance, and the results of the experiments suggest that each trait is correlated with specific facial characteristics. (A minimal code sketch of a region-wise correlation analysis follows this record.)
Address Florence, Italy
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor Andrea Fusiello, Vittorio Murino, Rita Cucchiara
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-33867-0 Medium
Area Expedition Conference ECCVW
Notes OR;MV Approved no
Call Number Admin @ si @ MTV2012 Serial 2171
Permanent link to this record
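Below is a minimal sketch of a region-wise correlation analysis in the spirit of the record above: for each rectangular facial region, the appearance distance between face pairs is correlated with the difference of their trait ratings, so regions with high correlation are the ones most tied to the trait inference. The region grid and the pairwise-distance formulation are assumptions, not the paper's similarity maps.

```python
# Minimal sketch: correlate per-region appearance distances with trait ratings.
import numpy as np
from itertools import combinations

def region_trait_correlation(faces, traits, grid=(4, 4)):
    """faces: (N, H, W) aligned grey-level faces; traits: (N,) trait ratings.
    Returns a grid-shaped map of |correlation| per facial region."""
    N, H, W = faces.shape
    gh, gw = H // grid[0], W // grid[1]
    pairs = list(combinations(range(N), 2))
    d_trait = np.array([abs(traits[i] - traits[j]) for i, j in pairs])
    corr = np.zeros(grid)
    for r in range(grid[0]):
        for c in range(grid[1]):
            patch = faces[:, r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            d_app = np.array([np.linalg.norm(patch[i] - patch[j])
                              for i, j in pairs])
            corr[r, c] = abs(np.corrcoef(d_app, d_trait)[0, 1])
    return corr   # higher values: regions more tied to the trait inference
```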
 

 
Author Pedro Martins; Carlo Gatta; Paulo Carvalho
Title Feature-driven Maximally Stable Extremal Regions Type Conference Article
Year 2012 Publication 7th International Conference on Computer Vision Theory and Applications Abbreviated Journal
Volume Issue Pages 490-497
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISAPP
Notes MILAB Approved no
Call Number Admin @ si @ MGC2012 Serial 2139
Permanent link to this record
 

 
Author Pedro Martins; Paulo Carvalho; Carlo Gatta
Title Context Aware Keypoint Extraction for Robust Image Representation Type Conference Article
Year 2012 Publication 23rd British Machine Vision Conference Abbreviated Journal
Volume Issue Pages 100.1 - 100.12
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference BMVC
Notes MILAB Approved no
Call Number Admin @ si @ MCG2012a Serial 2140
Permanent link to this record
 

 
Author Josep M. Gonfaus; Theo Gevers; Arjan Gijsenij; Xavier Roca; Jordi Gonzalez
Title Edge Classification using Photo-Geometric features Type Conference Article
Year 2012 Publication 21st International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 1497 - 1500
Keywords
Abstract Edges are caused by several imaging cues, such as shadow, material and illumination transitions. Classification methods have been proposed that rely solely on photometric information, ignoring geometry, to classify the physical nature of edges in images. In this paper, the aim is to present a novel strategy that handles both photometric and geometric information for edge classification. Photometric information is obtained through the use of quasi-invariants, while geometric information is derived from the orientation and contrast of edges. Different combination frameworks are compared with a new principled approach that captures both kinds of information in the same descriptor. Large-scale experiments on different datasets show that, in addition to photometric information, the geometry of edges is an important visual cue for distinguishing between different edge types. It is concluded that combining both cues improves performance by more than 7% for shadows and highlights. (A minimal code sketch of a joint photo-geometric edge descriptor follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1051-4651 ISBN 978-1-4673-2216-4 Medium
Area Expedition Conference ICPR
Notes ISE Approved no
Call Number Admin @ si @ GGG2012b Serial 2142
Permanent link to this record
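To illustrate the photo-geometric combination described in the record above, the sketch below concatenates a simplified photometric cue (channel-wise log ratios across an edge, a crude stand-in for the paper's quasi-invariants) with geometric cues (edge orientation and contrast) into one descriptor. A classifier, not shown, would then label each edge as a shadow, material or illumination transition.

```python
# Minimal sketch of a joint photo-geometric edge descriptor.
import numpy as np

def edge_descriptor(rgb_left, rgb_right, orientation_rad):
    """rgb_left / rgb_right: mean RGB on either side of the edge (3 values).
    Returns the concatenated photometric + geometric feature vector."""
    rgb_left = np.asarray(rgb_left, float) + 1e-6
    rgb_right = np.asarray(rgb_right, float) + 1e-6
    photometric = np.log(rgb_left / rgb_right)          # channel-wise ratios
    contrast = np.abs(rgb_left - rgb_right).mean()      # geometric: strength
    geometric = np.array([np.sin(orientation_rad),      # geometric: direction,
                          np.cos(orientation_rad),      # encoded continuously
                          contrast])
    return np.concatenate([photometric, geometric])

print(edge_descriptor([200, 180, 160], [80, 72, 64], np.pi / 4))
```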