Author (up) Murad Al Haj
Title Looking at Faces: Detection, Tracking and Pose Estimation Type Book Whole
Year 2013 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Humans can effortlessly perceive faces, follow them over space and time, and decode their rich content, such as pose, identity and expression. However, despite many decades of research on automatic facial perception in areas like face detection, expression recognition, pose estimation and face recognition, and despite many successes, a complete solution remains elusive. This thesis is dedicated to three problems in automatic face perception, namely face detection, face tracking and pose estimation.

In face detection, an initial simple model is presented that uses pixel-based heuristics to segment skin locations and hand-crafted rules to determine the locations of the faces present in an image. Different colorspaces are studied to judge whether a colorspace transformation can aid skin color detection. The output of this study is used in the design of a more complex face detector that is able to successfully generalize to different scenarios.

In face tracking, a framework that combines estimation and control in a joint scheme is presented to track a face with a single pan-tilt-zoom camera. While this work is mainly motivated by face tracking, the framework can easily be applied on top of any detector to track other objects. The applicability of this method is demonstrated in simulated as well as real-life scenarios.

The last and most important part of this thesis is dedicated to monocular head pose estimation. In this part, a method based on partial least squares (PLS) regression is proposed to estimate pose and solve the alignment problem simultaneously. The contributions of this work are two-fold: 1) demonstrating that the proposed method achieves better-than-state-of-the-art results on the estimation problem, and 2) developing a technique to reduce misalignment, based on the learned PLS factors, that outperforms multiple instance learning (MIL) without the need for any re-training or the inclusion of misaligned samples in the training process, as is normally done in MIL. (A sketch of the PLS regression idea follows this record.)
Address Barcelona
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Jordi Gonzalez;Xavier Roca
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ Haj2013 Serial 2278
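A minimal sketch of the PLS regression idea from the record above (Haj2013): regress a pose angle from image features and keep the learned latent factors. The synthetic features, the single yaw angle and the number of factors are illustrative assumptions, not the thesis' actual setup.

    # Sketch: head-pose estimation as PLS regression from image features.
    # Data, feature size and factor count are hypothetical placeholders.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n_samples, n_features = 500, 1024           # e.g. a vectorized face-crop descriptor
    X = rng.normal(size=(n_samples, n_features))
    yaw = rng.uniform(-90, 90, size=n_samples)  # pose angle in degrees
    X[:, 0] += yaw / 90.0                       # toy signal so the fit is meaningful

    pls = PLSRegression(n_components=20)        # number of latent PLS factors (assumed)
    pls.fit(X, yaw)
    print(pls.predict(X[:5]).ravel())           # predicted yaw for five samples

The thesis additionally reuses the learned PLS factors to detect and reduce misalignment; that part is not sketched here.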
 

 
Author (up) Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga
Title Low-level SpatioChromatic Grouping for Saliency Estimation Type Journal Article
Year 2013 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 35 Issue 11 Pages 2810-2816
Keywords
Abstract We propose a saliency model termed SIM (saliency by induction mechanisms), which is based on a low-level spatiochromatic model that has successfully predicted chromatic induction phenomena. In so doing, we hypothesize that the low-level visual mechanisms that enhance or suppress image detail are also responsible for making some image regions more salient. Moreover, SIM adds geometrical grouplets to enhance complex low-level features such as corners, and suppress relatively simpler features such as edges. Since our model has been fitted on psychophysical chromatic induction data, it is largely nonparametric. SIM outperforms state-of-the-art methods in predicting eye fixations on two datasets and using two metrics.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes CIC; 600.051; 600.052; 605.203 Approved no
Call Number Admin @ si @ MVO2013 Serial 2289
 

 
Author (up) Naveen Onkarappa
Title Optical Flow in Driver Assistance Systems Type Book Whole
Year 2013 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Motion perception is one of the most important attributes of the human brain. Visual motion perception consists in inferring the speed and direction of elements in a scene based on visual inputs. Analogously, computer vision is assisted by motion cues in the scene. Motion detection in computer vision is useful in solving problems such as segmentation, depth from motion, structure from motion, compression, navigation and many others. These problems are common in several applications, for instance video surveillance, robot navigation and advanced driver assistance systems (ADAS). One of the most widely used techniques for motion detection is optical flow estimation. The work in this thesis attempts to make optical flow suitable for the requirements and conditions of driving scenarios. In this context, a novel space-variant representation called the reverse log-polar representation is proposed and shown to be better suited than the traditional log-polar space-variant representation for ADAS. Space-variant representations reduce the amount of data to be processed. Another major contribution of this research is the analysis of the influence of specific characteristics of driving scenarios, such as vehicle speed and road texture, on optical flow accuracy. From this study, it is inferred that the regularization weight has to be adapted according to the required error measure and to different speeds and road textures. It is also shown that polar-represented optical flow suits driving scenarios where the predominant motion is translational. Given the requirements of such a study and the lack of suitable datasets, a new synthetic dataset is presented; it contains: i) sequences of different speeds and road textures in an urban scenario; ii) sequences with complex motion of an on-board camera; and iii) sequences with additional moving vehicles in the scene. The ground-truth optical flow is generated by ray tracing. Further, several applications of optical flow in ADAS are shown. First, a robust RANSAC-based technique to estimate the horizon line is proposed. Then, an egomotion estimation method is presented to compare the proposed space-variant representation with the classical one. As a final contribution, a modification of the regularization term is proposed that notably improves the results in ADAS applications. This adaptation is evaluated using a state-of-the-art optical flow technique. Experiments on a public dataset (KITTI) validate the advantages of the proposed modification. (A sketch of the RANSAC line-fitting component follows this record.)
Address Bellaterra
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Angel Sappa
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-940902-1-9 Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number Admin @ si @ Nav2013 Serial 2447
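One of the ADAS applications listed in the record above (Nav2013) is a robust RANSAC-based horizon line estimator. The sketch below shows only the generic RANSAC line-fitting core on synthetic candidate points; in the thesis the candidates come from the driving imagery itself, and the actual formulation may differ.

    # Sketch: RANSAC line fit, the robust core behind a horizon-line estimator.
    # Candidate points here are synthetic (an assumption).
    import numpy as np

    def ransac_line(pts, n_iter=200, thresh=2.0, rng=np.random.default_rng(0)):
        best_inliers, best_line = 0, None
        for _ in range(n_iter):
            i, j = rng.choice(len(pts), size=2, replace=False)
            (x1, y1), (x2, y2) = pts[i], pts[j]
            a, b = y1 - y2, x2 - x1              # normal of the line a*x + b*y + c = 0
            c = x1 * y2 - x2 * y1
            norm = np.hypot(a, b)
            if norm == 0:
                continue                         # degenerate sample, skip
            d = np.abs(a * pts[:, 0] + b * pts[:, 1] + c) / norm
            inliers = int((d < thresh).sum())
            if inliers > best_inliers:
                best_inliers, best_line = inliers, (a / norm, b / norm, c / norm)
        return best_line, best_inliers

    pts = np.column_stack([np.linspace(0, 640, 100),
                           240 + np.random.default_rng(1).normal(0, 1, 100)])
    print(ransac_line(pts))                      # recovers a near-horizontal line at y = 240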
 

 
Author (up) Naveen Onkarappa; Angel Sappa
Title A Novel Space Variant Image Representation Type Journal Article
Year 2013 Publication Journal of Mathematical Imaging and Vision Abbreviated Journal JMIV
Volume 47 Issue 1-2 Pages 48-59
Keywords Space-variant representation; Log-polar mapping; Onboard vision applications
Abstract Traditionally, images in machine vision are represented using Cartesian coordinates with uniform sampling along the axes. By contrast, biological vision systems represent images using polar coordinates with non-uniform sampling. Because of the various advantages provided by space-variant representations, many researchers are interested in space-variant computer vision. In this direction, the current work proposes a novel and simple space-variant representation of images. The proposed representation is compared with the classical log-polar mapping. The log-polar representation is motivated by biological vision, having the characteristic of higher resolution at the fovea and reduced resolution at the periphery. Contrary to the log-polar case, the proposed representation has higher resolution at the periphery and lower resolution at the fovea. Our proposal is shown to be a better representation in navigational scenarios such as driver assistance systems and robotics. The experimental results involve analysis of optical flow fields computed on both the proposed and log-polar representations. Additionally, an egomotion estimation application is shown as an illustrative example. The experimental analysis comprises results from synthetic as well as real sequences. (A toy comparison of the two samplings follows this record.)
Address
Corporate Author Thesis
Publisher Springer US Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0924-9907 ISBN Medium
Area Expedition Conference
Notes ADAS; 600.055; 605.203; 601.215 Approved no
Call Number Admin @ si @ OnS2013a Serial 2243
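To make the fovea/periphery contrast in the record above (OnS2013a) concrete, here is a toy comparison of radial ring spacing under log-polar sampling and a reversed variant. The reversed formula is an illustrative assumption, not necessarily the paper's exact mapping.

    # Sketch: log-polar rings are dense near the image center (fovea),
    # while a reversed variant is dense near the periphery.
    import numpy as np

    def logpolar_radii(n_rings, r_max):
        # exponentially growing ring radii: fine sampling near the center
        return r_max ** (np.arange(1, n_rings + 1) / n_rings)

    def reverse_logpolar_radii(n_rings, r_max):
        # mirror that spacing: fine sampling near the periphery (assumed form)
        return r_max - logpolar_radii(n_rings, r_max)[::-1] + 1

    print(logpolar_radii(8, 128).round(1))          # spacing grows towards the edge
    print(reverse_logpolar_radii(8, 128).round(1))  # spacing shrinks towards the edge

Under forward vehicle motion most image change happens away from the focus of expansion, which is one motivation for allocating resolution to the periphery in driving scenarios.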
 

 
Author (up) Naveen Onkarappa; Angel Sappa
Title Laplacian Derivative based Regularization for Optical Flow Estimation in Driving Scenario Type Conference Article
Year 2013 Publication 15th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal
Volume 8048 Issue Pages 483-490
Keywords Optical flow; regularization; Driver Assistance Systems; Performance Evaluation
Abstract Existing state-of-the-art optical flow approaches, which are evaluated on standard datasets such as Middlebury, do not necessarily perform similarly when evaluated on driving scenarios. This drop in performance is due to several challenges arising in real scenarios during driving. In this direction, we propose in this paper a modification to the regularization term in a variational optical flow formulation that notably improves the results, especially in driving scenarios. The proposed modification consists in using the Laplacian derivatives of the flow components in the regularization term instead of the gradients of the flow components. We show the improvements in results on a standard real image sequence dataset (KITTI). (A generic formulation of the two regularizers follows this record.)
Address York; UK; August 2013
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-40245-6 Medium
Area Expedition Conference CAIP
Notes ADAS; 600.055; 601.215 Approved no
Call Number Admin @ si @ OnS2013b Serial 2244
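In variational terms, the modification described in the record above (OnS2013b) replaces the first-order smoothness penalty with a second-order one. A generic form of the two energies, with notation assumed rather than copied from the paper:

    E_grad(u, v) = \int_\Omega \Psi\!\left( |I(\mathbf{x} + \mathbf{w}) - I(\mathbf{x})|^2 \right)
                   + \lambda \left( |\nabla u|^2 + |\nabla v|^2 \right) d\mathbf{x}

    E_lap(u, v)  = \int_\Omega \Psi\!\left( |I(\mathbf{x} + \mathbf{w}) - I(\mathbf{x})|^2 \right)
                   + \lambda \left( |\Delta u|^2 + |\Delta v|^2 \right) d\mathbf{x}

where w = (u, v) is the flow field, Psi a robust penalty and lambda the regularization weight. Penalizing the Laplacian |Δu|² instead of the gradient |∇u|² punishes curvature rather than any deviation from constancy, so the smoothly varying flow fields typical of forward ego-motion are no longer over-smoothed.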
 

 
Author (up) Nuria Cirera; Alicia Fornes; Volkmar Frinken; Josep Llados
Title Hybrid grammar language model for handwritten historical documents recognition Type Conference Article
Year 2013 Publication 6th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal
Volume 7887 Issue Pages 117-124
Keywords
Abstract In this paper we present a hybrid language model for the recognition of handwritten historical documents with a structured syntactical layout. Using a hidden Markov model-based recognition framework, a word-based grammar with a closed dictionary is enhanced by a character sequence recognition method. This allows out-of-dictionary words to be recognized in controlled parts of the recognition, while keeping a closed-vocabulary restriction for the other parts. While the current status is work in progress, we can report an improvement in terms of character error rate. (A toy sketch of the hybrid scoring idea follows this record.)
Address Madeira; Portugal; June 2013
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-38627-5 Medium
Area Expedition Conference IbPRIA
Notes DAG; 602.006; 600.045; 600.061 Approved no
Call Number Admin @ si @ CFF2013 Serial 2292
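A toy sketch of the hybrid idea in the record above (CFF2013): words are scored against a closed dictionary, and a character-level model is consulted only in the controlled parts of the layout where out-of-dictionary words are allowed. The lexicon, the character model and the log-probabilities are placeholders, not the paper's HMM framework.

    # Sketch: closed-vocabulary word scoring with a character-level fallback
    # that is enabled only where the layout permits out-of-dictionary words.
    import math

    LEXICON = {"anno", "domini", "baptizatus"}       # toy closed dictionary
    CHAR_LM = {c: 1 / 26 for c in "abcdefghijklmnopqrstuvwxyz"}  # toy uniform model

    def score(word, allow_oov):
        if word in LEXICON:
            return 0.0                               # toy log-prob for a known word
        if allow_oov:                                # controlled region: spell it out
            return sum(math.log(CHAR_LM.get(c, 1e-6)) for c in word)
        return float("-inf")                         # rejected outside controlled parts

    print(score("anno", False), score("joan", True), score("joan", False))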
 

 
Author (up) Olivier Penacchio; Xavier Otazu; Laura Dempere-Marco
Title A Neurodynamical Model of Brightness Induction in V1 Type Journal Article
Year 2013 Publication PloS ONE Abbreviated Journal Plos
Volume 8 Issue 5 Pages e64086
Keywords
Abstract Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. Recent neurophysiological evidence suggests that brightness information might be explicitly represented in V1, in contrast to the more common assumption that the striate cortex is an area mostly responsive to sensory information. Here we investigate possible neural mechanisms that offer a plausible explanation for such a phenomenon. To this end, a neurodynamical model which is based on neurophysiological evidence and focuses on the part of V1 responsible for contextual influences is presented. The proposed computational model successfully accounts for well-known psychophysical effects for static contexts and also for brightness induction in dynamic contexts defined by modulating the luminance of surrounding areas. This work suggests that intra-cortical interactions in V1 could, at least partially, explain brightness induction effects, and reveals how a common general architecture may account for several different fundamental processes, such as visual saliency and brightness induction, which emerge early in the visual processing pathway.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ POD2013 Serial 2242
 

 
Author (up) Onur Ferhat; Fernando Vilariño
Title A Cheap Portable Eye-Tracker Solution for Common Setups Type Conference Article
Year 2013 Publication 17th European Conference on Eye Movements Abbreviated Journal
Volume Issue Pages
Keywords Low cost; eye-tracker; software; webcam; Raspberry Pi
Abstract We analyze the feasibility of a cheap eye-tracker whose hardware consists of a single webcam and a Raspberry Pi device. Our aim is to discover the limits of such a system and to see whether it provides acceptable performance. We base our work on the open source Opengazer (Zielinski, 2013) and propose several improvements to create a robust, real-time system. After assessing the accuracy of our eye-tracker in elaborate experiments involving 18 subjects under 4 different system setups, we developed a simple game to see how it performs in practice, and we also installed it on a Raspberry Pi to create a portable stand-alone eye-tracker which achieves 1.62° horizontal accuracy with a 3 fps refresh rate at a build cost of 70 Euros.
Address Lund; Sweden; August 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECEM
Notes MV;SIAI Approved no
Call Number Admin @ si @ FeV2013 Serial 2374
 

 
Author (up) Patricia Marquez; Debora Gil; Aura Hernandez-Sabate
Title Evaluation of the Capabilities of Confidence Measures for Assessing Optical Flow Quality Type Conference Article
Year 2013 Publication ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars Abbreviated Journal
Volume Issue Pages 624-631
Keywords
Abstract Assessing Optical Flow (OF) quality is essential for its further use in reliable decision support systems. The absence of ground truth in such situations leads to the computation of OF Confidence Measures (CM) obtained from either input or output data. A fair comparison of the capabilities of the different CMs for bounding OF error is required in order to choose the best OF-CM pair for discarding points where OF computation is not reliable. This paper presents a statistical probabilistic framework for assessing the quality of a given CM. Our quality measure is given in terms of the percentage of pixels whose OF error bound cannot be determined by CM values. We also provide statistical tools for the computation of CM values that ensure a given accuracy of the flow field. (A rough sketch of this evaluation idea follows this record.)
Address Sydney; Australia; December 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVTT:E2M
Notes IAM; ADAS; 600.044; 600.057; 601.145 Approved no
Call Number Admin @ si @ MGH2013b Serial 2351
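A rough sketch of the kind of evaluation described in the record above (MGH2013b): pixels are binned by confidence value, an empirical error bound is taken per bin, and the reported quality is the fraction of pixels for which no bound meets a target accuracy. The binning scheme, percentile and toy data are assumptions, not the paper's statistical framework.

    # Sketch: fraction of pixels whose OF error bound, as read off the
    # confidence measure, fails a target accuracy. Toy data throughout.
    import numpy as np

    def unbounded_fraction(cm, err, target=1.0, n_bins=20, pct=95):
        edges = np.quantile(cm, np.linspace(0, 1, n_bins + 1))
        idx = np.clip(np.digitize(cm, edges[1:-1]), 0, n_bins - 1)
        bound = np.array([np.percentile(err[idx == b], pct) if (idx == b).any()
                          else np.inf for b in range(n_bins)])
        return float(np.mean(bound[idx] > target))

    rng = np.random.default_rng(0)
    cm = rng.uniform(0, 1, 10_000)                  # confidence per pixel
    err = rng.exponential(1.5 * (1 - cm))           # toy: high confidence, low error
    print(unbounded_fraction(cm, err))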
 

 
Author (up) Patricia Marquez; Debora Gil; Aura Hernandez-Sabate; Daniel Kondermann
Title When Is A Confidence Measure Good Enough? Type Conference Article
Year 2013 Publication 9th International Conference on Computer Vision Systems Abbreviated Journal
Volume 7963 Issue Pages 344-353
Keywords Optical flow, confidence measure, performance evaluation
Abstract Confidence estimation has recently become a hot topic in image processing and computer vision. Yet several definitions of the term “confidence” exist and are sometimes used interchangeably. This is a position paper in which we aim to give an overview of existing definitions, thereby clarifying the meaning of the terms used, to facilitate further research in this field. Based on these clarifications, we develop a theory to compare confidence measures with respect to their quality.
Address St Petersburg; Russia; July 2013
Corporate Author Thesis
Publisher Springer Link Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-39401-0 Medium
Area Expedition Conference ICVS
Notes IAM;ADAS; 600.044; 600.057; 600.060; 601.145 Approved no
Call Number IAM @ iam @ MGH2013a Serial 2218
 

 
Author (up) R. Bertrand; P. Gomez-Krämer; Oriol Ramos Terrades; P. Franco; Jean-Marc Ogier
Title A System Based On Intrinsic Features for Fraudulent Document Detection Type Conference Article
Year 2013 Publication 12th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages 106-110
Keywords paper document; document analysis; fraudulent document; forgery; fake
Abstract Paper documents still represent a large share of the information supports used nowadays and may contain critical data. Even though official documents are secured with techniques such as printed patterns or artwork, paper documents suffer from a lack of security.
Moreover, the wide availability of cheap scanning and printing hardware allows non-experts to easily create fake documents. As the use of a watermarking system added during the document production step is hardly possible, solutions have to be proposed to distinguish a genuine document from a forged one.
In this paper, we present an automatic forgery detection method based on a document's intrinsic features at the character level. The method relies, on the one hand, on outlier character detection in a discriminant feature space and, on the other hand, on the detection of strictly similar characters. To this end, a feature set is computed for all characters, and outliers and near-identical duplicates are then identified based on a distance between characters of the same class. (A minimal sketch of these two checks follows this record.)
Address Washington; USA; August 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1520-5363 ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 600.061 Approved no
Call Number Admin @ si @ BGR2013a Serial 2332
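A minimal sketch of the two intrinsic checks mentioned in the record above (BGR2013a): flag characters whose features are outliers within their class, and flag pairs that are suspiciously close to identical (likely copy-paste forgeries). The feature vectors and thresholds are placeholders.

    # Sketch: per-class outlier detection plus near-duplicate detection
    # over character feature vectors. Features and thresholds are hypothetical.
    import numpy as np
    from itertools import combinations

    def analyze_class(feats, out_thresh=3.0, dup_thresh=1e-3):
        feats = np.asarray(feats)
        mu, sd = feats.mean(0), feats.std(0) + 1e-9
        z = np.abs((feats - mu) / sd).max(1)         # worst z-score per character
        outliers = np.where(z > out_thresh)[0]
        dups = [(i, j) for i, j in combinations(range(len(feats)), 2)
                if np.linalg.norm(feats[i] - feats[j]) < dup_thresh]
        return outliers, dups

    glyphs = np.random.default_rng(0).normal(size=(50, 8))  # e.g. all 'e' glyphs
    glyphs[3] = glyphs[7]                            # simulate a copied glyph
    print(analyze_class(glyphs))                     # (3, 7) shows up as a duplicate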
 

 
Author (up) Rahat Khan; Joost Van de Weijer; Dimosthenis Karatzas; Damien Muselet
Title Towards multispectral data acquisition with hand-held devices Type Conference Article
Year 2013 Publication 20th IEEE International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 2053 - 2057
Keywords Multispectral; mobile devices; color measurements
Abstract We propose a method to acquire multispectral data with hand-held devices with front-mounted RGB cameras. We propose to use the display of the device as an illuminant while the camera captures images illuminated by the red, green and blue primaries of the display. Three illuminants and three response functions of the camera lead to nine response values, which are used for reflectance estimation. Results are promising and show that the accuracy of the spectral reconstruction improves by 30-40% over spectral reconstruction based on a single illuminant. Furthermore, we propose to compute a sensor-illuminant-aware linear basis by discarding the part of the reflectances that falls in the sensor-illuminant null-space. We show experimentally that optimizing reflectance estimation on these new basis functions decreases the RMSE significantly over basis functions that are independent of the sensor-illuminant pair. We conclude that multispectral data acquisition is potentially possible with consumer hand-held devices such as tablets, mobiles and laptops, opening up applications which are currently considered to be unrealistic. (A small linear-inverse sketch of the acquisition model follows this record.)
Address Melbourne; Australia; September 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes CIC; DAG; 600.048 Approved no
Call Number Admin @ si @ KWK2013b Serial 2265
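The nine-response acquisition model in the record above (KWK2013b) can be sketched as a small linear inverse problem: three display primaries times three camera channels give a 9-row response operator, and reflectance is recovered by least squares on a low-dimensional basis. All spectra below are synthetic placeholders, not measured display or camera data.

    # Sketch: reflectance estimation from 3 illuminants x 3 sensors = 9 responses.
    import numpy as np

    rng = np.random.default_rng(0)
    wl = np.linspace(400, 700, 31)                  # wavelength samples (nm)
    illum = rng.uniform(0, 1, (3, len(wl)))         # toy display R, G, B emission spectra
    sens = rng.uniform(0, 1, (3, len(wl)))          # toy camera R, G, B sensitivities
    A = np.stack([l * s for l in illum for s in sens])   # 9 x 31 response operator

    basis = np.linalg.svd(rng.normal(size=(100, len(wl))),
                          full_matrices=False)[2][:8]    # toy 8-vector reflectance basis
    r_true = basis.T @ rng.normal(size=8)           # a reflectance inside the basis
    c = A @ r_true                                  # the nine measured responses

    w, *_ = np.linalg.lstsq(A @ basis.T, c, rcond=None)  # least-squares coefficients
    r_est = basis.T @ w
    print(np.abs(r_est - r_true).max())             # ~0 for in-basis reflectances

The paper's sensor-illuminant aware basis amounts to choosing the basis so that little of it falls in the null-space of the response operator; the random basis above does not model that step.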
 

 
Author (up) Rahat Khan; Joost Van de Weijer; Fahad Shahbaz Khan; Damien Muselet; Christophe Ducottet; Cecile Barat
Title Discriminative Color Descriptors Type Conference Article
Year 2013 Publication IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 2866 - 2873
Keywords
Abstract Color description is a challenging task because of large variations in RGB values which occur due to scene accidental events, such as shadows, shading, specularities, illuminant color changes, and changes in viewing geometry. Traditionally, this challenge has been addressed by capturing the variations in physics-based models and deriving invariants for the undesired variations. The drawback of this approach is that sets of distinguishable colors in the original color space are mapped to the same value in the photometric invariant space. This results in a drop in the discriminative power of the color description. In this paper we take an information-theoretic approach to color description. We cluster color values together based on their discriminative power in a classification problem. The clustering has the explicit objective of minimizing the drop of mutual information of the final representation. We show that such a color description automatically learns a certain degree of photometric invariance. We also show that a universal color representation, which is based on data sets other than the one at hand, can obtain competing performance. Experiments show that the proposed descriptor outperforms existing photometric invariants. Furthermore, we show that, combined with shape description, these color descriptors obtain excellent results on four challenging datasets, namely PASCAL VOC 2007, Flowers-102, Stanford dogs-120 and Birds-200. (A toy sketch of the mutual-information clustering follows this record.)
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN Medium
Area Expedition Conference CVPR
Notes CIC; 600.048 Approved no
Call Number Admin @ si @ KWK2013a Serial 2262
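A toy sketch of the information-theoretic clustering idea in the record above (KWK2013a): color bins are merged greedily, at each step choosing the pair whose merge loses the least mutual information I(color; class). The joint distribution is random toy data, and the paper's actual algorithm may differ in both formulation and efficiency.

    # Sketch: greedy merging of color bins that minimizes the drop in I(C;Y).
    import numpy as np
    from itertools import combinations

    def mi(p):                                      # I(C;Y) for a joint p(color, class)
        pc, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
        nz = p > 0
        return float((p[nz] * np.log(p[nz] / (pc @ py)[nz])).sum())

    def merged_mi(clusters, i, j):                  # MI after merging bins i and j
        rows = [c for k, c in enumerate(clusters) if k not in (i, j)]
        return mi(np.array(rows + [clusters[i] + clusters[j]]))

    def cluster_colors(p, n_clusters):
        clusters = [p[i] for i in range(len(p))]    # one row per color bin
        while len(clusters) > n_clusters:
            i, j = max(combinations(range(len(clusters)), 2),
                       key=lambda ij: merged_mi(clusters, *ij))
            merged = clusters[i] + clusters[j]
            clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
            clusters.append(merged)
        return clusters

    p = np.random.default_rng(0).dirichlet(np.ones(4), size=16) / 16  # 16 bins, 4 classes
    print(len(cluster_colors(p, 5)))                # 5 merged color clusters

Maximizing the post-merge MI is equivalent to minimizing the MI loss, since the pre-merge MI is the same for every candidate pair.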
 

 
Author (up) S.Grau; Ana Puig; Sergio Escalera; Maria Salamo
Title Intelligent Interactive Volume Classification Type Conference Article
Year 2013 Publication Pacific Graphics Abbreviated Journal
Volume 32 Issue 7 Pages 23-28
Keywords
Abstract This paper defines an intelligent and interactive framework to classify multiple regions of interest from the original data on demand, without requiring any preprocessing or previous segmentation. The proposed approach is divided into three stages: visualization, training and testing. First, users visualize and label some samples directly on slices of the volume. Training and testing are based on a framework of Error-Correcting Output Codes and AdaBoost classifiers that learn to classify each region the user has painted. Later, at the testing stage, each classifier is applied directly to the rest of the samples and combined to perform multi-class labeling, which is used in the final rendering. We also parallelized the training stage using a GPU-based implementation to obtain rapid interaction and classification. (A sketch of the ECOC/AdaBoost core follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-3-905674-50-7 Medium
Area Expedition Conference PG
Notes HuPBA; 600.046;MILAB Approved no
Call Number Admin @ si @ GPE2013b Serial 2355
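The classification core described in the record above (GPE2013b), Error-Correcting Output Codes over AdaBoost binary learners, can be sketched with off-the-shelf components; the toy voxel features and labels stand in for the user's painted samples, and the GPU parallelization is not modeled.

    # Sketch: ECOC multi-class classification with AdaBoost base learners,
    # trained on a few labelled voxel samples. Features and data are toy stand-ins.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.multiclass import OutputCodeClassifier

    rng = np.random.default_rng(0)
    # toy voxel features (e.g. intensity, gradient magnitude) for 3 painted regions
    X = np.vstack([rng.normal(m, 0.3, size=(40, 2)) for m in (0.2, 0.5, 0.8)])
    y = np.repeat([0, 1, 2], 40)                    # user-painted region labels

    clf = OutputCodeClassifier(AdaBoostClassifier(n_estimators=50),
                               code_size=2.0, random_state=0)
    clf.fit(X, y)
    print(clf.predict([[0.45, 0.5]]))               # classify an unlabelled voxel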
 

 
Author (up) S.Grau; Anna Puig; Sergio Escalera; Maria Salamo; Oscar Amoros
Title Efficient complementary viewpoint selection in volume rendering Type Conference Article
Year 2013 Publication 21st WSCG Conference on Computer Graphics, Abbreviated Journal
Volume Issue Pages
Keywords Dual camera; Visualization; Interactive Interfaces; Dynamic Time Warping.
Abstract A major goal of visualization is to appropriately express knowledge of scientific data. Generally, gathering the visual information contained in volume data requires a lot of expertise from the final user to set up the parameters of the visualization. One way of alleviating this problem is to provide the position of inner structures from different viewpoint locations, to enhance the perception and construction of the mental image. To this end, traditional illustrations use two or three different views of the regions of interest. Similarly, with the aim of assisting users to easily place a good viewpoint location, this paper proposes an automatic and interactive method that locates different complementary viewpoints from a reference camera in volume datasets. Specifically, the proposed method combines the quantity of information each camera provides for each structure with the shape similarity of the projections of the remaining viewpoints, based on Dynamic Time Warping. The selected complementary viewpoints allow a better understanding of the focused structure in several applications. Thus, the user interactively receives feedback based on several viewpoints, which helps them to understand the visual information. A live-user evaluation on different data sets shows good convergence to useful complementary viewpoints. (A minimal sketch of the DTW comparison follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-808694374-9 Medium
Area Expedition Conference WSCG
Notes HuPBA; 600.046;MILAB Approved no
Call Number Admin @ si @ GPE2013a Serial 2255
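A minimal sketch of the Dynamic Time Warping comparison used in the record above (GPE2013a) to judge the shape similarity between projections of candidate viewpoints; the 1-D signatures standing in for projected silhouettes are an assumption.

    # Sketch: classic DTW distance between two viewpoint 'shape signatures'.
    import numpy as np

    def dtw(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    t = np.linspace(0, 2 * np.pi, 60)
    sig_ref = np.sin(t)                             # signature of the reference viewpoint
    sig_cand = np.sin(t + 0.4)                      # a candidate, slightly rotated
    print(dtw(sig_ref, sig_cand))                   # lower value = more similar projection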