Author Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title Evaluating Color Representation for Online Road Detection Type Conference Article
Year 2013 Publication ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars Abbreviated Journal
Volume Issue Pages 594-595
Keywords
Abstract Detecting traversable road areas ahead of a moving vehicle is a key process for modern autonomous driving systems. Most existing algorithms use color to classify pixels as road or background. These algorithms reduce the effect of lighting variations and weather conditions by exploiting the discriminant/invariant properties of different color representations. However, to date, no comparison between these representations has been conducted. Therefore, in this paper, we perform an evaluation of existing color representations for road detection. More specifically, we focus on color planes derived from RGB data and their most common combinations. The evaluation is done on a set of 7000 road images acquired using an on-board camera in different real-driving situations.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVVT:E2M
Notes ADAS;ISE Approved no
Call Number Admin @ si @ AGL2013 Serial 2794
Permanent link to this record
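The record above compares color representations for road detection. As a minimal Python sketch (not the authors' method), the following derives a few common color planes from RGB and scores pixels against a hypothetical seed road patch; the choice of planes, the seed-region idea and the distance-based score are illustrative assumptions.

```python
import numpy as np

def color_planes(rgb):
    """Derive a few color representations often compared for road detection.
    rgb: float image in [0, 1], shape (H, W, 3). Purely illustrative."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = r + g + b + 1e-6
    return {
        "normalized_rg": np.stack([r / s, g / s], axis=-1),   # intensity-invariant
        "opponent": np.stack([(r - g) / np.sqrt(2),
                              (r + g - 2 * b) / np.sqrt(6)], axis=-1),
        "intensity": ((r + g + b) / 3.0)[..., None],
    }

def road_likelihood(rgb, seed_mask):
    """Score each pixel by its distance to the mean color of a seed road region
    (e.g. a patch just ahead of the vehicle); higher score = more road-like."""
    feats = np.concatenate(list(color_planes(rgb).values()), axis=-1)
    mu = feats[seed_mask].mean(axis=0)
    return -np.linalg.norm(feats - mu, axis=-1)
```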
 

 
Author Zeynep Yucel; Albert Ali Salah; Çetin Meriçli; Tekin Meriçli; Roberto Valenti; Theo Gevers
Title Joint Attention by Gaze Interpolation and Saliency Type Journal
Year 2013 Publication IEEE Transactions on Cybernetics Abbreviated Journal T-CIBER
Volume 43 Issue 3 Pages 829-842
Keywords
Abstract Joint attention, which is the ability to coordinate a common point of reference with a communicating party, emerges as a key factor in various interaction scenarios. This paper presents an image-based method for establishing joint attention between an experimenter and a robot. The precise analysis of the experimenter's eye region requires stability and high-resolution image acquisition, which is not always available. We investigate regression-based interpolation of the gaze direction from the head pose of the experimenter, which is easier to track. Gaussian process regression and neural networks are contrasted to interpolate the gaze direction. Then, we combine gaze interpolation with image-based saliency to improve the target point estimates and test three different saliency schemes. We demonstrate the proposed method on a human-robot interaction scenario. Cross-subject evaluations, as well as experiments under adverse conditions (such as dimmed or artificial illumination or motion blur), show that our method generalizes well and achieves rapid gaze estimation for establishing joint attention.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2168-2267 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ YSM2013 Serial 2363
Permanent link to this record
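A small sketch of the gaze-from-head-pose interpolation idea in the record above, using scikit-learn's Gaussian process regressor. The training data here is synthetic and the pan/tilt parameterisation is an assumption; the paper's actual features, kernels and saliency fusion are not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Stand-in training pairs: head pose (pan, tilt) -> gaze direction (yaw, pitch),
# e.g. collected while the experimenter fixates known target points.
rng = np.random.default_rng(0)
head_pose = rng.uniform(-30, 30, size=(200, 2))
gaze_dir = 1.3 * head_pose + rng.normal(0, 2, size=(200, 2))

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(1.0),
                               normalize_y=True)
gpr.fit(head_pose, gaze_dir)

# Interpolate gaze for a new head pose; the predictive uncertainty could be used
# to decide how much to trust the saliency map when fusing the two estimates.
mean, std = gpr.predict(np.array([[12.0, -5.0]]), return_std=True)
```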
 

 
Author Sergio Escalera
Title Multi-Modal Human Behaviour Analysis from Visual Data Sources Type Journal
Year 2013 Publication ERCIM News journal Abbreviated Journal ERCIM
Volume 95 Issue Pages 21-22
Keywords
Abstract The Human Pose Recovery and Behaviour Analysis group (HuPBA), University of Barcelona, is developing a line of research on multi-modal analysis of humans in visual data. The novel technology is being applied in several scenarios with high social impact, including sign language recognition, assistive technology and supported diagnosis for the elderly and people with mental/physical disabilities, fitness conditioning, and Human-Computer Interaction.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0926-4981 ISBN Medium
Area Expedition Conference
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ Esc2013 Serial 2361
Permanent link to this record
 

 
Author Juan Ramon Terven Salinas; Joaquin Salas; Bogdan Raducanu
Title Estado del Arte en Sistemas de Vision Artificial para Personas Invidentes [State of the Art in Computer Vision Systems for the Blind] Type Journal
Year 2013 Publication Komputer Sapiens Abbreviated Journal KS
Volume 1 Issue Pages 20-25
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number Admin @ si @ TSR2013 Serial 2231
Permanent link to this record
 

 
Author Jaume Amores
Title Multiple Instance Classification: review, taxonomy and comparative study Type Journal Article
Year 2013 Publication Artificial Intelligence Abbreviated Journal AI
Volume 201 Issue Pages 81-105
Keywords Multi-instance learning; Codebook; Bag-of-Words
Abstract Multiple Instance Learning (MIL) has become an important topic in the pattern recognition community, and many solutions to this problem have been proposed to date. Despite this fact, there is a lack of comparative studies that shed light on the characteristics and behavior of the different methods. In this work we provide such an analysis focused on the classification task (i.e., leaving out other learning tasks such as regression). In order to perform our study, we implemented fourteen methods grouped into three different families. We analyze the performance of the approaches across a variety of well-known databases, and we also study their behavior in synthetic scenarios in order to highlight their characteristics. As a result of this analysis, we conclude that methods that extract global bag-level information show a clearly superior performance in general. In this sense, the analysis permits us to understand why some types of methods are more successful than others, and it permits us to establish guidelines in the design of new MIL methods.
Address
Corporate Author Thesis
Publisher Elsevier Science Publishers Ltd. Place of Publication Essex, UK Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0004-3702 ISBN Medium
Area Expedition Conference
Notes ADAS; 601.042; 600.057 Approved no
Call Number Admin @ si @ Amo2013 Serial 2273
Permanent link to this record
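The survey above concludes that MIL methods extracting global, bag-level information tend to perform best. Below is a hedged sketch of one such approach, a Bag-of-Words style bag embedding followed by an SVM; the codebook is assumed to come from clustering instance features, and nothing here reproduces the fourteen methods actually compared.

```python
import numpy as np
from sklearn.svm import SVC

def embed_bag(instances, codebook):
    """Bag-of-Words style bag-level embedding: assign each instance to its nearest
    codeword and return the normalized histogram (a 'global' MIL representation)."""
    d = np.linalg.norm(instances[:, None, :] - codebook[None, :, :], axis=-1)
    hist = np.bincount(d.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / (hist.sum() + 1e-12)

def train_mil_classifier(bags, labels, codebook):
    """bags: list of (n_i, d) instance arrays; labels: one label per bag;
    codebook: (k, d) cluster centres obtained beforehand (assumed given)."""
    X = np.vstack([embed_bag(b, codebook) for b in bags])
    return SVC(kernel="rbf", C=1.0).fit(X, labels)
```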
 

 
Author Jasper Uijlings; Koen E.A. van de Sande; Theo Gevers; Arnold Smeulders
Title Selective Search for Object Recognition Type Journal Article
Year 2013 Publication International Journal of Computer Vision Abbreviated Journal IJCV
Volume 104 Issue 2 Pages 154-171
Keywords
Abstract This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 % recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html).
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0920-5691 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ USG2013 Serial 2362
Permanent link to this record
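For reference, OpenCV's contrib module ships a re-implementation of selective search; the short sketch below shows how to obtain class-independent box proposals with it. This is the OpenCV version, not the authors' released software, and the proposal cap is an arbitrary assumption.

```python
import cv2  # requires the opencv-contrib-python package for ximgproc

def selective_search_proposals(image_bgr, fast=True, max_boxes=2000):
    """Generate class-independent object location proposals with OpenCV's
    implementation of selective search."""
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(image_bgr)
    if fast:
        ss.switchToSelectiveSearchFast()
    else:
        ss.switchToSelectiveSearchQuality()
    boxes = ss.process()          # each box is (x, y, w, h)
    return boxes[:max_boxes]

# Example: boxes = selective_search_proposals(cv2.imread("street.jpg"))
```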
 

 
Author Ivan Huerta; Ariel Amato; Xavier Roca; Jordi Gonzalez
Title Exploiting Multiple Cues in Motion Segmentation Based on Background Subtraction Type Journal Article
Year 2013 Publication Neurocomputing Abbreviated Journal NEUCOM
Volume 100 Issue Pages 183–196
Keywords Motion segmentation; Shadow suppression; Colour segmentation; Edge segmentation; Ghost detection; Background subtraction
Abstract This paper presents a novel algorithm for mobile-object segmentation from static background scenes, which is both robust and accurate under most of the common problems found in motion segmentation. In our first contribution, a case analysis of motion segmentation errors is presented taking into account the inaccuracies associated with different cues, namely colour, edge and intensity. Our second contribution is a hybrid architecture which copes with the main issues observed in the case analysis by fusing the knowledge from the aforementioned three cues and a temporal difference algorithm. On the one hand, we enhance the colour and edge models to solve not only global and local illumination changes (i.e. shadows and highlights) but also camouflage in intensity. In addition, local information is also exploited to solve camouflage in chroma. On the other hand, the intensity cue is applied when colour and edge cues are not available because their values are beyond the dynamic range. Additionally, a temporal difference scheme is included to segment motion where those three cues cannot be reliably computed, for example in those background regions not visible during the training period. Lastly, our approach is extended to handle ghost detection. The proposed method obtains very accurate and robust motion segmentation results in multiple indoor and outdoor scenarios, while outperforming the most commonly referenced state-of-the-art approaches.
Address
Corporate Author Thesis
Publisher Elsevier Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ HAR2013 Serial 1808
Permanent link to this record
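A toy sketch of the cue-fusion idea in the record above: a chromaticity-based colour cue (which discounts shadows) combined with an edge cue against a background model. The thresholds, the simple OR fusion and the gradient operator are illustrative assumptions; the paper's architecture is considerably richer (intensity cue, temporal differencing, ghost handling).

```python
import numpy as np

def multicue_foreground(frame, bg_mean, bg_edges, tau_color=0.1, tau_edge=0.2):
    """Toy fusion of colour and edge cues for background subtraction.
    frame, bg_mean: float RGB images in [0, 1]; bg_edges: background gradient magnitude."""
    # Colour cue in normalized chromaticity, which discounts shadows/highlights.
    def chroma(img):
        s = img.sum(axis=-1, keepdims=True) + 1e-6
        return img / s
    color_mask = np.linalg.norm(chroma(frame) - chroma(bg_mean), axis=-1) > tau_color

    # Edge cue: gradient-magnitude difference against the background model.
    gy, gx = np.gradient(frame.mean(axis=-1))
    edge_mask = np.abs(np.hypot(gx, gy) - bg_edges) > tau_edge

    # Fuse: a pixel is foreground if either cue fires (the paper uses a richer scheme).
    return color_mask | edge_mask
```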
 

 
Author Bhaskar Chakraborty; Andrew Bagdanov; Jordi Gonzalez; Xavier Roca
Title Human Action Recognition Using an Ensemble of Body-Part Detectors Type Journal Article
Year 2013 Publication Expert Systems Abbreviated Journal EXSY
Volume 30 Issue 2 Pages 101-114
Keywords Human action recognition; body-part detection; hidden Markov model
Abstract This paper describes an approach to human action recognition based on a probabilistic optimization model of body parts using hidden Markov model (HMM). Our method is able to distinguish between similar actions by only considering the body parts having major contribution to the actions, for example, legs for walking, jogging and running; arms for boxing, waving and clapping. We apply HMMs to model the stochastic movement of the body parts for action recognition. The HMM construction uses an ensemble of body-part detectors, followed by grouping of part detections, to perform human identification. Three example-based body-part detectors are trained to detect three components of the human body: the head, legs and arms. These detectors cope with viewpoint changes and self-occlusions through the use of ten sub-classifiers that detect body parts over a specific range of viewpoints. Each sub-classifier is a support vector machine trained on features selected for the discriminative power for each particular part/viewpoint combination. Grouping of these detections is performed using a simple geometric constraint model that yields a viewpoint-invariant human detector. We test our approach on three publicly available action datasets: the KTH dataset, Weizmann dataset and HumanEva dataset. Our results illustrate that with a simple and compact representation we can achieve robust recognition of human actions comparable to the most complex, state-of-the-art methods.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ CBG2013 Serial 1809
Permanent link to this record
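A minimal sketch of the HMM-per-action classification scheme described above, using the hmmlearn package. Per-frame features are assumed to be vectors of detected body-part positions; the number of states, the covariance type and the detector ensemble itself are not taken from the paper.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumes the hmmlearn package is installed

def train_action_hmms(sequences_per_action, n_states=5):
    """Train one HMM per action class over per-frame body-part features
    (e.g. detected head/leg/arm positions)."""
    models = {}
    for action, seqs in sequences_per_action.items():
        X = np.vstack(seqs)                  # stacked frames of all sequences
        lengths = [len(s) for s in seqs]     # per-sequence frame counts
        models[action] = GaussianHMM(n_components=n_states,
                                     covariance_type="diag").fit(X, lengths)
    return models

def classify(models, sequence):
    """Pick the action whose HMM assigns the highest log-likelihood to the sequence."""
    return max(models, key=lambda a: models[a].score(sequence))
```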
 

 
Author Kaida Xiao; Chenyang Fu; D.Mylonas; Dimosthenis Karatzas; S. Wuerger
Title Unique Hue Data for Colour Appearance Models. Part II: Chromatic Adaptation Transform Type Journal Article
Year 2013 Publication Color Research & Application Abbreviated Journal CRA
Volume 38 Issue 1 Pages 22-29
Keywords
Abstract Unique hue settings of 185 observers under three room-lighting conditions were used to evaluate the accuracy of full and mixed chromatic adaptation transform models of CIECAM02 in terms of unique hue reproduction. Perceptual hue shifts in CIECAM02 were evaluated for both models with no clear difference using the current Commission Internationale de l'Éclairage (CIE) recommendation for mixed chromatic adaptation ratio. Using our large dataset of unique hue data as a benchmark, an optimised parameter is proposed for chromatic adaptation under mixed illumination conditions that produces more accurate results in unique hue reproduction.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG Approved no
Call Number Admin @ si @ XFM2013 Serial 1822
Permanent link to this record
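As context for the record above, here is a minimal sketch of a von Kries style chromatic adaptation in the CAT02 space, with the adapting white under mixed illumination taken as a weighted mix of the display and ambient whites. The mixing ratio merely stands in for the adaptation ratio the paper optimises; this is not the full CIECAM02 pipeline, and the simple D-weighted gain is an assumption.

```python
import numpy as np

# CAT02 matrix from CIECAM02 (XYZ -> sharpened cone-like RGB space).
M_CAT02 = np.array([[ 0.7328, 0.4286, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def cat02_adapt(xyz, white_src, white_dst, D=1.0):
    """Von Kries style adaptation in CAT02 space with degree of adaptation D;
    a minimal sketch, not the full CIECAM02 model evaluated in the paper."""
    rgb   = M_CAT02 @ np.asarray(xyz)
    rgb_s = M_CAT02 @ np.asarray(white_src)
    rgb_d = M_CAT02 @ np.asarray(white_dst)
    gain  = D * (rgb_d / rgb_s) + (1.0 - D)
    return np.linalg.inv(M_CAT02) @ (gain * rgb)

def mixed_white(white_display, white_ambient, ratio=0.6):
    """Adapting white under mixed illumination as a convex combination of the display
    and ambient whites; 'ratio' plays the role of the tunable adaptation ratio."""
    return ratio * np.asarray(white_display) + (1 - ratio) * np.asarray(white_ambient)
```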
 

 
Author Albert Gordo; Florent Perronnin; Ernest Valveny
Title Large-scale document image retrieval and classification with runlength histograms and binary embeddings Type Journal Article
Year 2013 Publication Pattern Recognition Abbreviated Journal PR
Volume 46 Issue 7 Pages 1898-1905
Keywords visual document descriptor; compression; large-scale; retrieval; classification
Abstract We present a new document image descriptor based on multi-scale runlength histograms. This descriptor does not rely on layout analysis and can be computed efficiently. We show how this descriptor can achieve state-of-the-art results on two very different public datasets in classification and retrieval tasks. Moreover, we show how we can compress and binarize these descriptors to make them suitable for large-scale applications. We can achieve state-of-the-art results in classification using binary descriptors of as few as 16 to 64 bits.
Address
Corporate Author Thesis
Publisher Elsevier Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0031-3203 ISBN Medium
Area Expedition Conference
Notes DAG; 600.042; 600.045; 605.203 Approved no
Call Number Admin @ si @ GPV2013 Serial 2306
Permanent link to this record
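A rough sketch of the multi-scale run-length histogram idea behind the descriptor above, for a binarized page image. The bin count, run-length cap and pyramid layout are assumptions, and the compression/binarization stage of the paper is omitted.

```python
import numpy as np

def runlength_histogram(binary, axis=1, n_bins=16, max_run=64):
    """Histogram of horizontal (axis=1) or vertical (axis=0) run lengths of 'on'
    pixels in a binarized page; an illustrative stand-in for the paper's descriptor."""
    runs = []
    img = binary if axis == 1 else binary.T
    for row in img:
        # Split each row into maximal runs of ones and record their lengths.
        padded = np.diff(np.concatenate(([0], row.astype(int), [0])))
        starts, ends = np.where(padded == 1)[0], np.where(padded == -1)[0]
        runs.extend(ends - starts)
    hist, _ = np.histogram(runs, bins=n_bins, range=(1, max_run))
    return hist / (hist.sum() + 1e-12)

def multiscale_descriptor(binary, levels=3):
    """Concatenate run-length histograms over a spatial pyramid (1x1, 2x2, 4x4)."""
    feats, (H, W) = [], binary.shape
    for l in range(levels):
        k = 2 ** l
        for i in range(k):
            for j in range(k):
                cell = binary[i*H//k:(i+1)*H//k, j*W//k:(j+1)*W//k]
                feats.append(runlength_histogram(cell, axis=1))
                feats.append(runlength_histogram(cell, axis=0))
    return np.concatenate(feats)
```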
 

 
Author Albert Gordo; Alicia Fornes; Ernest Valveny
Title Writer identification in handwritten musical scores with bags of notes Type Journal Article
Year 2013 Publication Pattern Recognition Abbreviated Journal PR
Volume 46 Issue 5 Pages 1337-1345
Keywords
Abstract Writer Identification is an important task for the automatic processing of documents. However, the identification of the writer in graphical documents is still challenging. In this work, we adapt the Bag of Visual Words framework to the task of writer identification in handwritten musical scores. A vanilla implementation of this method already performs comparably to the state-of-the-art. Furthermore, we analyze the effect of two improvements of the representation: a Bhattacharyya embedding, which improves the results at virtually no extra cost, and a Fisher Vector representation that very significantly improves the results at the cost of a more complex and costly representation. Experimental evaluation shows results more than 20 points above the state-of-the-art in a new, challenging dataset.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0031-3203 ISBN Medium
Area Expedition Conference
Notes DAG Approved no
Call Number Admin @ si @ GFV2013 Serial 2307
Permanent link to this record
 

 
Author Veronica Romero; Alicia Fornes; Nicolas Serrano; Joan Andreu Sanchez; A.H. Toselli; Volkmar Frinken; E. Vidal; Josep Llados
Title The ESPOSALLES database: An ancient marriage license corpus for off-line handwriting recognition Type Journal Article
Year 2013 Publication Pattern Recognition Abbreviated Journal PR
Volume 46 Issue 6 Pages 1658-1669
Keywords
Abstract Historical records of daily activities provide intriguing insights into the life of our ancestors, useful for demography studies and genealogical research. Automatic processing of historical documents, however, has mostly been focused on single works of literature and less on social records, which tend to have a distinct layout, structure, and vocabulary. Such information is usually collected by expert demographers that devote a lot of time to manually transcribe them. This paper presents a new database, compiled from a marriage license books collection, to support research in automatic handwriting recognition for historical documents containing social records. Marriage license books are documents that were used for centuries by ecclesiastical institutions to register marriage licenses. Books from this collection are handwritten and span nearly half a millennium until the beginning of the 20th century. In addition, a study is presented about the capability of state-of-the-art handwritten text recognition systems, when applied to the presented database. Baseline results are reported for reference in future studies.
Address
Corporate Author Thesis
Publisher Elsevier Science Inc. Place of Publication New York, NY, USA Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0031-3203 ISBN Medium
Area Expedition Conference
Notes DAG; 600.045; 602.006; 605.203 Approved no
Call Number Admin @ si @ RFS2013 Serial 2298
Permanent link to this record
 

 
Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Andrew Bagdanov; Antonio Lopez; Michael Felsberg
Title Coloring Action Recognition in Still Images Type Journal Article
Year 2013 Publication International Journal of Computer Vision Abbreviated Journal IJCV
Volume 105 Issue 3 Pages 205-221
Keywords
Abstract In this article we investigate the problem of human action recognition in static images. By action recognition we mean a class of problems which includes both action classification and action detection (i.e. simultaneous localization and classification). Bag-of-words image representations yield promising results for action classification, and deformable part models perform very well at object detection. The representations for action recognition typically use only shape cues and ignore color information. Inspired by the recent success of color in image classification and object detection, we investigate the potential of color for action classification and detection in static images. We perform a comprehensive evaluation of color descriptors and fusion approaches for action recognition. Experiments were conducted on the three datasets most used for benchmarking action recognition in still images: Willow, PASCAL VOC 2010 and Stanford-40. Our experiments demonstrate that incorporating color information considerably improves recognition performance, and that a descriptor based on color names outperforms pure color descriptors. Our experiments demonstrate that late fusion of color and shape information outperforms other approaches on action recognition. Finally, we show that the different color–shape fusion approaches result in complementary information and combining them yields state-of-the-art performance for action classification.
Address
Corporate Author Thesis
Publisher Springer US Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0920-5691 ISBN Medium
Area Expedition Conference
Notes CIC; ADAS; 600.057; 600.048 Approved no
Call Number Admin @ si @ KRW2013 Serial 2285
Permanent link to this record
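The experiments above find that late fusion of colour and shape cues works best for action classification in still images. Below is a hedged sketch of plain score-level late fusion with linear SVMs; the feature extractors, the fusion weight and the classifier choice are assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

def late_fusion_scores(shape_feats, color_feats, labels, train_idx, test_idx, w=0.5):
    """Late fusion: train separate classifiers on shape and colour descriptors and
    combine their decision scores; 'w' weights the two cues (illustrative only)."""
    clf_shape = LinearSVC().fit(shape_feats[train_idx], labels[train_idx])
    clf_color = LinearSVC().fit(color_feats[train_idx], labels[train_idx])
    fused = (w * clf_shape.decision_function(shape_feats[test_idx])
             + (1 - w) * clf_color.decision_function(color_feats[test_idx]))
    return fused  # per-class scores; argmax along the class axis gives the prediction
```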
 

 
Author Jordi Roca; C. Alejandro Parraga; Maria Vanrell
Title Chromatic settings and the structural color constancy index Type Journal Article
Year 2013 Publication Journal of Vision Abbreviated Journal JV
Volume 13 Issue 4-3 Pages 1-26
Keywords
Abstract Color constancy is usually measured by achromatic setting, asymmetric matching, or color naming paradigms, whose results are interpreted in terms of indexes and models that arguably do not capture the full complexity of the phenomenon. Here we propose a new paradigm, chromatic setting, which allows a more comprehensive characterization of color constancy through the measurement of multiple points in color space under immersive adaptation. We demonstrated its feasibility by assessing the consistency of subjects' responses over time. The paradigm was applied to two-dimensional (2-D) Mondrian stimuli under three different illuminants, and the results were used to fit a set of linear color constancy models. The use of multiple colors improved the precision of more complex linear models compared to the popular diagonal model computed from gray. Our results show that a diagonal plus translation matrix that models mechanisms other than cone gain might be best suited to explain the phenomenon. Additionally, we calculated a number of color constancy indices for several points in color space, and our results suggest that interrelations among colors are not as uniform as previously believed. To account for this variability, we developed a new structural color constancy index that takes into account the magnitude and orientation of the chromatic shift in addition to the interrelations among colors and memory effects.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC; 600.052; 600.051; 605.203 Approved no
Call Number Admin @ si @ RPV2013 Serial 2288
Permanent link to this record
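A small sketch of the "diagonal plus translation" mechanism discussed above: fit a per-channel gain and offset mapping responses under the test illuminant to responses under the reference illuminant by least squares. The LMS/RGB parameterisation and the plain least-squares fit are assumptions made for illustration, not the paper's fitting procedure.

```python
import numpy as np

def fit_diag_plus_translation(cones_test, cones_ref):
    """Fit per-channel gain g and offset t so that g * cones_test + t ~ cones_ref,
    i.e. a diagonal-plus-translation color constancy model.
    cones_*: (n_samples, 3) LMS (or RGB) responses of matching surfaces."""
    gains, offsets = np.empty(3), np.empty(3)
    for c in range(3):
        A = np.stack([cones_test[:, c], np.ones(len(cones_test))], axis=1)
        sol, *_ = np.linalg.lstsq(A, cones_ref[:, c], rcond=None)
        gains[c], offsets[c] = sol
    return gains, offsets

# Usage: corrected = gains * cones_test + offsets
```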
 

 
Author Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga
Title Low-level SpatioChromatic Grouping for Saliency Estimation Type Journal Article
Year 2013 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 35 Issue 11 Pages 2810-2816
Keywords
Abstract We propose a saliency model termed SIM (saliency by induction mechanisms), which is based on a low-level spatiochromatic model that has successfully predicted chromatic induction phenomena. In so doing, we hypothesize that the low-level visual mechanisms that enhance or suppress image detail are also responsible for making some image regions more salient. Moreover, SIM adds geometrical grouplets to enhance complex low-level features such as corners, and suppress relatively simpler features such as edges. Since our model has been fitted on psychophysical chromatic induction data, it is largely nonparametric. SIM outperforms state-of-the-art methods in predicting eye fixations on two datasets and using two metrics.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes CIC; 600.051; 600.052; 605.203 Approved no
Call Number Admin @ si @ MVO2013 Serial 2289
Permanent link to this record