Author Xavier Otazu; C. Alejandro Parraga; Maria Vanrell
Title Towards a unified chromatic induction model Type Journal Article
Year 2010 Publication Journal of Vision Abbreviated Journal VSS
Volume 10 Issue 12:5 Pages 1-24
Keywords Visual system; Color induction; Wavelet transform
Abstract In a previous work (X. Otazu, M. Vanrell, & C. A. Párraga, 2008b), we showed how several brightness induction effects can be predicted using a simple multiresolution wavelet model (BIWaM). Here we present a new model for chromatic induction processes (termed Chromatic Induction Wavelet Model or CIWaM), which is also implemented on a multiresolution framework and based on similar assumptions related to the spatial frequency and the contrast surround energy of the stimulus. The CIWaM can be interpreted as a very simple extension of the BIWaM to the chromatic channels, which in our case are defined in the MacLeod-Boynton (lsY) color space. This new model allows us to unify both chromatic assimilation and chromatic contrast effects in a single mathematical formulation. The predictions of the CIWaM were tested by means of several color and brightness induction experiments, which showed an acceptable agreement between model predictions and psychophysical data.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number CAT @ cat @ OPV2010 Serial 1450
Permanent link to this record
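A minimal sketch of the multiresolution re-weighting idea behind BIWaM/CIWaM as described in the record above, assuming pywt for the wavelet decomposition. The lsY color-space conversion and the paper's actual weighting function (its extended contrast sensitivity function) are not reproduced here; induction_weight is a hypothetical placeholder driven by spatial scale and centre/surround contrast energy.

```python
# Sketch: decompose one chromatic channel into wavelet planes, re-weight each
# plane as a function of scale and local surround contrast energy, reconstruct.
# The weighting below is a made-up placeholder, not the paper's ECSF.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter


def induction_weight(detail, level, center_size=8):
    """Hypothetical weight: centre-to-surround contrast energy ratio, scaled by level."""
    energy = detail ** 2
    center = uniform_filter(energy, size=center_size)
    surround = uniform_filter(energy, size=4 * center_size) + 1e-8
    scale_factor = 1.0 / (1.0 + level)      # toy scale dependence (stands in for the ECSF)
    return scale_factor * center / surround


def ciwam_like(channel, wavelet="haar", levels=4):
    """Re-weight the wavelet detail planes of one chromatic channel (e.g. l, s or Y)."""
    coeffs = pywt.wavedec2(channel, wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]
    new_details = [tuple(d * induction_weight(d, lvl) for d in plane)
                   for lvl, plane in enumerate(details)]
    return pywt.waverec2([approx] + new_details, wavelet)


if __name__ == "__main__":
    img_channel = np.random.rand(128, 128)   # stand-in for an lsY channel
    print(ciwam_like(img_channel).shape)
```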
 

 
Author C. Alejandro Parraga; Jordi Roca; Maria Vanrell
Title Do Basic Colors Influence Chromatic Adaptation? Type Journal Article
Year 2011 Publication Journal of Vision Abbreviated Journal VSS
Volume 11 Issue 11 Pages 85
Keywords
Abstract Color constancy (the ability to perceive colors as relatively stable under different illuminants) is the result of several mechanisms spread across different neural levels and responding to several visual scene cues. It is usually measured by estimating the perceived color of a grey patch under an illuminant change. In this work, we ask whether chromatic adaptation (without a reference white or grey) could be driven by certain colors, specifically those corresponding to the universal color terms proposed by Berlin and Kay (1969). To this end we developed a new psychophysical paradigm in which subjects adjust the color of a test patch (in CIELab space) to match their memory of the best example of a given color chosen from the universal terms list (grey, red, green, blue, yellow, purple, pink, orange and brown). The test patch is embedded inside a Mondrian image and presented on a calibrated CRT screen inside a dark cabin. All subjects were trained to “recall” their most exemplary colors reliably from memory and asked to always produce the same basic colors when required under several adaptation conditions. These conditions included achromatic and colored Mondrian backgrounds under a simulated D65 illuminant and several colored illuminants. A set of basic colors was measured for each subject under neutral conditions (achromatic background and D65 illuminant) and used as “reference” for the rest of the experiment. The colors adjusted by the subjects in each adaptation condition were compared to the reference colors under the corresponding illuminant and a “constancy index” was obtained for each of them. Our results show that for some colors the constancy index was better than for grey. The set of best-adapted colors in each condition was common to a majority of subjects and depended on the chromaticity of the illuminant and the chromatic background considered.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1534-7362 ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ PRV2011 Serial 1759
Permanent link to this record
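The record above derives a “constancy index” by comparing the colors adjusted under each adaptation condition with the reference colors under the corresponding illuminant, but the formula is not given. The snippet below shows one common Brunswik-ratio-style formulation in CIELab, used purely as an illustrative assumption and not necessarily the index computed in this study.

```python
# Illustrative constancy index: 1 = the adjusted colour coincides with the
# reference colour rendered under the test illuminant (perfect constancy),
# 0 = no adaptation. Assumed formulation, for illustration only.
import numpy as np


def constancy_index(adjusted_lab, reference_neutral_lab, reference_under_illuminant_lab):
    shift_needed = np.linalg.norm(np.subtract(reference_under_illuminant_lab,
                                              reference_neutral_lab))
    residual = np.linalg.norm(np.subtract(adjusted_lab, reference_under_illuminant_lab))
    return 1.0 if shift_needed == 0 else 1.0 - residual / shift_needed


# Toy example for one basic colour ("best red") under a coloured illuminant.
print(constancy_index(adjusted_lab=[52, 68, 40],
                      reference_neutral_lab=[50, 70, 45],
                      reference_under_illuminant_lab=[53, 67, 41]))
```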
 

 
Author Javier Vazquez; J. Kevin O'Regan; Maria Vanrell; Graham D. Finlayson
Title A new spectrally sharpened basis to predict colour naming, unique hues, and hue cancellation Type Journal Article
Year 2012 Publication Journal of Vision Abbreviated Journal VSS
Volume 12 Issue 6:7 Pages 1-14
Keywords
Abstract When light is reflected off a surface, there is a linear relation between the three human photoreceptor responses to the incoming light and the three photoreceptor responses to the reflected light. Different colored surfaces have different linear relations. Recently, Philipona and O'Regan (2006) showed that when this relation is singular in a mathematical sense, then the surface is perceived as having a highly nameable color. Furthermore, white light reflected by that surface is perceived as corresponding precisely to one of the four psychophysically measured unique hues. However, Philipona and O'Regan's approach seems unrelated to classical psychophysical models of color constancy. In this paper we make this link. We begin by transforming cone sensors to spectrally sharpened counterparts. In sharp color space, illumination change can be modeled by simple von Kries type scalings of response values within each of the spectrally sharpened response channels. In this space, Philipona and O'Regan's linear relation is captured by a simple Land-type color designator defined by dividing reflected light by incident light. This link between Philipona and O'Regan's theory and Land's notion of color designator gives the model biological plausibility. We then show that Philipona and O'Regan's singular surfaces are surfaces which are very close to activating only one or only two of such newly defined spectrally sharpened sensors, instead of the usual three. Closeness to zero is quantified in a new simplified measure of singularity which is also shown to relate to the chromaticness of colors. As in Philipona and O'Regan's original work, our new theory accounts for a large variety of psychophysical color data.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ VOV2012 Serial 1998
Permanent link to this record
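A small sketch of the Land-type colour designator described in the record above: cone responses are mapped to a spectrally sharpened basis, reflected light is divided by incident light channel by channel (a von Kries-style scaling), and “singularity” is measured as closeness of the weakest channels to zero. The sharpening matrix here is an invented placeholder, not the one derived in the paper, and the singularity measure is a simplified stand-in.

```python
import numpy as np

# Hypothetical LMS -> sharpened-sensor transform (placeholder values).
SHARPENING = np.array([[ 1.20, -0.25,  0.05],
                       [-0.15,  1.10,  0.05],
                       [ 0.00, -0.05,  1.05]])


def color_designator(lms_reflected, lms_incident):
    """Per-channel ratio of reflected to incident light in the sharpened basis."""
    sharp_refl = SHARPENING @ np.asarray(lms_reflected, dtype=float)
    sharp_inc = SHARPENING @ np.asarray(lms_incident, dtype=float)
    return sharp_refl / sharp_inc


def singularity(designator):
    """Smaller = closer to activating only one or two sharpened channels."""
    d = np.sort(np.abs(designator))
    return d[0] / (d[-1] + 1e-8)


d = color_designator(lms_reflected=[0.30, 0.05, 0.02], lms_incident=[1.0, 1.0, 1.0])
print(d, singularity(d))
```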
 

 
Author T. Chauhan; E. Perales; Kaida Xiao; E. Hird; Dimosthenis Karatzas; Sophie Wuerger
Title The achromatic locus: Effect of navigation direction in color space Type Journal Article
Year 2014 Publication Journal of Vision Abbreviated Journal VSS
Volume 14 Issue 1:25 Pages 1-11
Keywords achromatic; unique hues; color constancy; luminance; color space
Abstract An achromatic stimulus is defined as a patch of light that is devoid of any hue. This is usually achieved by asking observers to adjust the stimulus such that it looks neither red nor green and at the same time neither yellow nor blue. Despite the theoretical and practical importance of the achromatic locus, little is known about the variability in these settings. The main purpose of the current study was to evaluate whether achromatic settings were dependent on the task of the observers, namely the navigation direction in color space. Observers could either adjust the test patch along the two chromatic axes in the CIE u*v* diagram or, alternatively, navigate along the unique-hue lines. Our main result is that the navigation method affects the reliability of these achromatic settings. Observers are able to make more reliable achromatic settings when adjusting the test patch along the directions defined by the four unique hues as opposed to navigating along the main axes in the commonly used CIE u*v* chromaticity plane. This result holds across different ambient viewing conditions (Dark, Daylight, Cool White Fluorescent) and different test luminance levels (5, 20, and 50 cd/m2). The reduced variability in the achromatic settings is consistent with the idea that internal color representations are more aligned with the unique-hue lines than the u* and v* axes.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.077; 5Y Impact Factor: 2.99 / 1st (Ophthalmology) Approved no
Call Number Admin @ si @ CPX2014 Serial 2418
Permanent link to this record
 

 
Author Mariella Dimiccoli
Title Figure-ground segregation: A fully nonlocal approach Type Journal Article
Year 2016 Publication Vision Research Abbreviated Journal VR
Volume 126 Issue Pages 308-317
Keywords Figure-ground segregation; Nonlocal approach; Directional linear voting; Nonlinear diffusion
Abstract We present a computational model that computes and integrates in a nonlocal fashion several configural cues for automatic figure-ground segregation. Our working hypothesis is that the figural status of each pixel is a nonlocal function of several geometric shape properties and it can be estimated without explicitly relying on object boundaries. The methodology is grounded on two elements: multi-directional linear voting and nonlinear diffusion. A first estimation of the figural status of each pixel is obtained as a result of a voting process, in which several differently oriented line-shaped neighborhoods vote to express their belief about the figural status of the pixel. A nonlinear diffusion process is then applied to enforce the coherence of figural status estimates among perceptually homogeneous regions. Computer simulations fit human perception and match the experimental evidence that several cues cooperate in defining figure-ground segregation. The results of this work suggest that figure-ground segregation involves feedback from cells with larger receptive fields in higher visual cortical areas.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ Dim2016b Serial 2623
Permanent link to this record
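A compact sketch of the two stages named in the record above: multi-directional linear voting followed by nonlinear diffusion. The voting rule here is a simplified stand-in (an oriented line average of a generic cue map), and the diffusion is a Perona-Malik-style scheme; neither reproduces the paper's exact cues or parameters.

```python
import numpy as np
from scipy.ndimage import rotate, uniform_filter1d


def directional_voting(cue_map, n_orientations=8, length=15):
    """Each pixel collects votes from line-shaped neighbourhoods at several orientations."""
    votes = np.zeros_like(cue_map, dtype=float)
    for k in range(n_orientations):
        angle = 180.0 * k / n_orientations
        rotated = rotate(cue_map, angle, reshape=False, order=1)
        line_avg = uniform_filter1d(rotated, size=length, axis=1)
        votes += rotate(line_avg, -angle, reshape=False, order=1)
    return votes / n_orientations


def nonlinear_diffusion(u, n_iter=50, kappa=0.1, lam=0.2):
    """Perona-Malik-style diffusion to homogenise estimates within uniform regions."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        grads = [np.roll(u, s, axis) - u for axis in (0, 1) for s in (-1, 1)]
        u += lam * sum(np.exp(-(g / kappa) ** 2) * g for g in grads)
    return u


cue = np.random.rand(64, 64)                     # stand-in for a configural cue map
figural_status = nonlinear_diffusion(directional_voting(cue))
print(figural_status.shape)
```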
 

 
Author Ivet Rafegas; Maria Vanrell
Title Color encoding in biologically-inspired convolutional neural networks Type Journal Article
Year 2018 Publication Vision Research Abbreviated Journal VR
Volume 151 Issue Pages 7-17
Keywords Color coding; Computer vision; Deep learning; Convolutional neural networks
Abstract Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks showed representational properties that rival primate performances in object recognition. In this paper we explore how color is encoded in a trained artificial network. This is done by estimating a color selectivity index for each neuron, which describes the neuron's activity in response to color input stimuli. The index allows us to classify neurons as color selective or not, and, among the selective ones, as tuned to a single color or to a pair of colors. We have determined that all five convolutional layers of the network have a large number of color selective neurons. Color opponency clearly emerges in the first layer, presenting 4 main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but this is reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons and opponency is reduced almost to one new main axis, the Bluish-Orangish axis, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representation are successively entangled through all the layers of the studied network, revealing certain parallels with the evidence reported for primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC; 600.051; 600.087 Approved no
Call Number Admin @ si @RaV2018 Serial 3114
Permanent link to this record
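The record above estimates a colour selectivity index per neuron, but the abstract does not spell out its definition. Below is one plausible, hypothetical formulation, comparing a unit's mean activation to coloured stimuli against its activation to grayscale versions of the same stimuli; the index actually used in the paper is computed from each neuron's top-activating image patches and may differ.

```python
import numpy as np


def grayscale(batch):
    """Replicate luminance across RGB channels (batch: N x H x W x 3)."""
    lum = batch @ np.array([0.299, 0.587, 0.114])
    return np.repeat(lum[..., None], 3, axis=-1)


def color_selectivity_index(activation_fn, patches):
    """activation_fn maps a batch of patches to one scalar activation per patch."""
    a_color = np.mean(activation_fn(patches))
    a_gray = np.mean(activation_fn(grayscale(patches)))
    return (a_color - a_gray) / (a_color + a_gray + 1e-8)   # near 0: not colour selective


# Toy usage: a fake "neuron" that responds to reddish patches, probed with reddish stimuli.
fake_neuron = lambda batch: batch[..., 0].mean(axis=(1, 2)) - batch[..., 1].mean(axis=(1, 2))
patches = np.random.rand(16, 7, 7, 3)
patches[..., 0] = np.clip(patches[..., 0] + 0.5, 0.0, 1.0)
print(color_selectivity_index(fake_neuron, patches))        # close to 1 (strongly selective)
```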
 

 
Author David Berga; Xose R. Fernandez-Vidal; Xavier Otazu; V. Leboran; Xose M. Pardo
Title Psychophysical evaluation of individual low-level feature influences on visual attention Type Journal Article
Year 2019 Publication Vision Research Abbreviated Journal VR
Volume 154 Issue Pages 60-79
Keywords Visual attention; Psychophysics; Saliency; Task; Context; Contrast; Center bias; Low-level; Synthetic; Dataset
Abstract In this study we analyze the eye movement behavior elicited by low-level feature distinctiveness using a dataset of synthetically generated image patterns. The design of the visual stimuli was inspired by those used in previous psychophysical experiments, namely free-viewing and visual search tasks, yielding a total of 15 types of stimuli divided according to the task and feature to be analyzed. Our interest is in the influence of low-level feature contrast between a salient region and the remaining distractors, characterizing fixation localization and the reaction time of landing inside the salient region. Eye-tracking data were collected from 34 participants during the viewing of a dataset of 230 images. Results show that saliency is predominantly and distinctively influenced by: 1. feature type, 2. feature contrast, 3. temporality of fixations, 4. task difficulty and 5. center bias. This experimentation provides a new psychophysical basis for saliency model evaluation using synthetic images.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes NEUROBIT; 600.128; 600.120 Approved no
Call Number Admin @ si @ BFO2019a Serial 3274
Permanent link to this record
 

 
Author Simone Balocco; Carlo Gatta; Oriol Pujol; J. Mauri; Petia Radeva
Title SRBF: Speckle Reducing Bilateral Filtering Type Journal Article
Year 2010 Publication Ultrasound in Medicine and Biology Abbreviated Journal UMB
Volume 36 Issue 8 Pages 1353-1363
Keywords
Abstract Speckle noise negatively affects medical ultrasound image shape interpretation and boundary detection. Speckle removal filters are widely used to selectively remove speckle noise without destroying important image features to enhance object boundaries. In this article, a fully automatic bilateral filter tailored to ultrasound images is proposed. The edge preservation property is obtained by embedding noise statistics in the filter framework. Consequently, the filter is able to tackle the multiplicative behavior by modulating the smoothing strength with respect to local statistics. The in silico experiments clearly showed that the speckle reducing bilateral filter (SRBF) has superior performance to most state-of-the-art filtering methods. The filter is tested on 50 in vivo US images and its influence on a segmentation task is quantified. The results using SRBF-filtered data sets show superior performance compared with oriented anisotropic diffusion filtered images. This improvement is due to the adaptive support of SRBF and the embedded noise statistics, yielding a more homogeneous smoothing. SRBF results in a fully automatic, fast and flexible algorithm potentially suitable for a wide range of speckle noise sizes and for different medical applications (IVUS, B-mode, 3-D matrix array US).
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB;HUPBA Approved no
Call Number BCNPCL @ bcnpcl @ BGP2010 Serial 1314
Permanent link to this record
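The exact noise model and automatic parameter estimation of SRBF cannot be reproduced from the abstract above; the sketch below only illustrates the general idea of a bilateral filter whose range weight adapts to local statistics, so that the tolerated intensity difference grows with the local mean, as expected for multiplicative speckle. Parameter values are arbitrary.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def adaptive_bilateral(img, radius=3, sigma_s=2.0, k=0.5):
    """Bilateral filter with a range width proportional to the local mean."""
    img = img.astype(float)
    local_mean = uniform_filter(img, size=2 * radius + 1)
    sigma_r = k * local_mean + 1e-8                  # speckle-aware range width
    out, norm = np.zeros_like(img), np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    for dy, dx, w_s in zip(ys.ravel(), xs.ravel(), spatial.ravel()):
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        w_r = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))
        out += w_s * w_r * shifted
        norm += w_s * w_r
    return out / norm


speckled = np.random.gamma(shape=4.0, scale=0.25, size=(64, 64))   # toy multiplicative noise
print(adaptive_bilateral(speckled).std())
```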
 

 
Author Marina Alberti; Simone Balocco; Xavier Carrillo; J. Mauri; Petia Radeva
Title Automatic non-rigid temporal alignment of IVUS sequences: method and quantitative validation Type Journal Article
Year 2013 Publication Ultrasound in Medicine and Biology Abbreviated Journal UMB
Volume 39 Issue 9 Pages 1698-1712
Keywords Intravascular ultrasound; Dynamic time warping; Non-rigid alignment; Sequence matching; Partial overlapping strategy
Abstract Clinical studies on atherosclerosis regression/progression performed by intravascular ultrasound analysis would benefit from accurate alignment of sequences of the same patient before and after clinical interventions and at follow-up. In this article, a methodology for automatic alignment of intravascular ultrasound sequences based on the dynamic time warping technique is proposed. The non-rigid alignment is adapted to the specific task by applying it to multidimensional signals describing the morphologic content of the vessel. Moreover, dynamic time warping is embedded into a framework comprising a strategy to address partial overlapping between acquisitions and a term that regularizes non-physiologic temporal compression/expansion of the sequences. Extensive validation is performed on both synthetic and in vivo data. The proposed method reaches alignment errors of approximately 0.43 mm for pairs of sequences acquired during the same intervention phase and 0.77 mm for pairs of sequences acquired at successive intervention stages.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ ABC2013 Serial 2313
Permanent link to this record
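The alignment in the record above is built on dynamic time warping applied to multidimensional signals describing the vessel morphology. The basic DTW step is shown below on toy descriptor sequences; the paper's partial-overlap strategy and the term regularizing non-physiologic compression/expansion are omitted.

```python
import numpy as np


def dtw_align(seq_a, seq_b):
    """Align two sequences of frame descriptors (shapes (n, d) and (m, d))."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else ((i - 1, j) if step == 1 else (i, j - 1))
    return cost[n, m], path[::-1]


a = np.cumsum(np.random.rand(40, 3), axis=0)       # toy per-frame morphology descriptors
b = a[::2] + 0.05 * np.random.randn(20, 3)
dist, path = dtw_align(a, b)
print(dist, len(path))
```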
 

 
Author G. Zahnd; Simone Balocco; A. Serusclat; P. Moulin; M. Orkisz; D. Vray
Title Progressive attenuation of the longitudinal kinetics in the common carotid artery: preliminary in vivo assessment Type Journal Article
Year 2015 Publication Ultrasound in Medicine and Biology Abbreviated Journal UMB
Volume 41 Issue 1 Pages 339-345
Keywords Arterial stiffness; Atherosclerosis; Common carotid artery; Longitudinal kinetics; Motion tracking; Ultrasound imaging
Abstract Longitudinal kinetics (LOKI) of the arterial wall consists of the shearing motion of the intima-media complex over the adventitia layer in the direction parallel to the blood flow during the cardiac cycle. The aim of this study was to investigate the local variability of LOKI amplitude along the length of the vessel. By use of a previously validated motion-estimation framework, 35 in vivo longitudinal B-mode ultrasound cine loops of healthy common carotid arteries were analyzed. Results indicated that LOKI amplitude is progressively attenuated along the length of the artery, as it is larger in regions located on the proximal side of the image (i.e., toward the heart) and smaller in regions located on the distal side of the image (i.e., toward the head), with an average attenuation coefficient of -2.5 ± 2.0%/mm. Reported for the first time in this study, this phenomenon is likely to be of great importance in improving understanding of atherosclerosis mechanisms, and has the potential to be a novel index of arterial stiffness.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ ZBS2014 Serial 2556
Permanent link to this record
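One way to obtain an attenuation coefficient in %/mm, consistent with the units reported in the record above, is to regress LOKI amplitude against longitudinal position and express the slope relative to the mean amplitude. This is an assumed formulation for illustration; the paper's exact estimation procedure is not detailed in the abstract.

```python
import numpy as np


def attenuation_coefficient(positions_mm, amplitudes_mm):
    """Percent change of LOKI amplitude per mm along the vessel (assumed definition)."""
    slope, _ = np.polyfit(positions_mm, amplitudes_mm, deg=1)
    return 100.0 * slope / np.mean(amplitudes_mm)


# Toy data: amplitude decreasing from the proximal to the distal end of the image.
x = np.linspace(0, 30, 15)                               # longitudinal position (mm)
amp = 0.8 - 0.02 * x + 0.01 * np.random.randn(x.size)    # LOKI amplitude (mm)
print(f"{attenuation_coefficient(x, amp):.1f} %/mm")
```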
 

 
Author David Masip; Agata Lapedriza; Jordi Vitria
Title Boosted Online Learning for Face Recognition Type Journal Article
Year 2009 Publication IEEE Transactions on Systems, Man and Cybernetics part B Abbreviated Journal TSMCB
Volume 39 Issue 2 Pages 530–538
Keywords
Abstract Face recognition applications commonly suffer from three main drawbacks: a reduced training set, information lying in high-dimensional subspaces, and the need to incorporate new people to recognize. In the recent literature, the extension of a face classifier in order to include new people in the model has been solved using online feature extraction techniques. The most successful of these are extensions of principal component analysis or linear discriminant analysis. In the current paper, a new online boosting algorithm is introduced: a face recognition method that extends a boosting-based classifier by adding new classes while avoiding the need to retrain the classifier each time a new person joins the system. The classifier is learned using the multitask learning principle, where multiple verification tasks are trained together sharing the same feature space. The new classes are added taking advantage of the previously learned structure, so that adding new classes is not computationally demanding. The present proposal has been experimentally validated with two different facial data sets by comparing our approach with the current state-of-the-art techniques. The results show that the proposed online boosting algorithm fares better in terms of final accuracy. In addition, the global performance does not decrease drastically even when the number of classes of the base problem is multiplied by eight.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1083-4419 ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ MLV2009 Serial 1155
Permanent link to this record
 

 
Author Fadi Dornaika; Bogdan Raducanu
Title Three-Dimensional Face Pose Detection and Tracking Using Monocular Videos: Tool and Application Type Journal Article
Year 2009 Publication IEEE Transactions on Systems, Man and Cybernetics part B Abbreviated Journal TSMCB
Volume 39 Issue 4 Pages 935–944
Keywords
Abstract Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low-quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods (initialization and tracking) to enhance the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation such as telepresence, virtual reality, and video games can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ DoR2009a Serial 1218
Permanent link to this record
 

 
Author Sergio Escalera; Alicia Fornes; Oriol Pujol; Josep Llados; Petia Radeva
Title Circular Blurred Shape Model for Multiclass Symbol Recognition Type Journal Article
Year 2011 Publication IEEE Transactions on Systems, Man and Cybernetics (Part B) Abbreviated Journal TSMCB
Volume 41 Issue 2 Pages 497-506
Keywords
Abstract In this paper, we propose a circular blurred shape model descriptor to deal with the problem of symbol detection and classification as a particular case of object recognition. The feature extraction is performed by capturing the spatial arrangement of significant object characteristics in a correlogram structure. The shape information from objects is shared among correlogram regions, where a prior blurring degree defines the level of distortion allowed in the symbol, making the descriptor tolerant to irregular deformations. Moreover, the descriptor is rotation invariant by definition. We validate the effectiveness of the proposed descriptor in both the multiclass symbol recognition and symbol detection domains. In order to perform the symbol detection, the descriptors are learned using a cascade of classifiers. In the case of multiclass categorization, the new feature space is learned using a set of binary classifiers embedded in an error-correcting output code design. The results over four symbol data sets show significant improvements of the proposed descriptor over state-of-the-art descriptors. In particular, the results are even more significant in those cases where the symbols suffer from elastic deformations.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1083-4419 ISBN Medium
Area Expedition Conference
Notes MILAB; DAG;HuPBA Approved no
Call Number Admin @ si @ EFP2011 Serial 1784
Permanent link to this record
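A simplified illustration of the circular, blurred spatial binning described in the record above: contour points vote into a polar correlogram centred on the shape centroid, each vote is partially shared with neighbouring angular bins (the “blurring”), and the histogram is circularly shifted to a canonical orientation for rotation invariance. The real CBSM correlogram, its blurring scheme, and its learning with cascades and ECOC are richer than this sketch.

```python
import numpy as np


def circular_descriptor(contour_points, n_angles=16, n_radii=4, blur=0.5):
    pts = np.asarray(contour_points, dtype=float)
    centred = pts - pts.mean(axis=0)
    radii = np.linalg.norm(centred, axis=1)
    angles = np.arctan2(centred[:, 1], centred[:, 0])
    r_bin = np.minimum((radii / (radii.max() + 1e-8) * n_radii).astype(int), n_radii - 1)
    a_bin = ((angles + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    hist = np.zeros((n_radii, n_angles))
    for rb, ab in zip(r_bin, a_bin):
        hist[rb, ab] += 1.0
        hist[rb, (ab - 1) % n_angles] += blur        # share mass with angular neighbours
        hist[rb, (ab + 1) % n_angles] += blur
    shift = int(np.argmax(hist.sum(axis=0)))         # canonical orientation
    return (np.roll(hist, -shift, axis=1) / hist.sum()).ravel()


contour = [(np.cos(t), np.sin(t)) for t in np.linspace(0, 2 * np.pi, 64, endpoint=False)]
print(circular_descriptor(contour)[:8])
```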
 

 
Author Javier Marin; David Vazquez; Antonio Lopez; Jaume Amores; Ludmila I. Kuncheva
Title Occlusion handling via random subspace classifiers for human detection Type Journal Article
Year 2014 Publication IEEE Transactions on Systems, Man, and Cybernetics (Part B) Abbreviated Journal TSMCB
Volume 44 Issue 3 Pages 342-354
Keywords Pedestriand Detection; occlusion handling
Abstract This paper describes a general method to address partial occlusions for human detection in still images. The Random Subspace Method (RSM) is chosen for building a classifier ensemble robust against partial occlusions. The component classifiers are chosen on the basis of their individual and combined performance. The main contribution of this work lies in our approach’s capability to improve the detection rate when partial occlusions are present without compromising the detection performance on non-occluded data. In contrast to many recent approaches, we propose a method which does not require manual labelling of body parts, defining any semantic spatial components, or using additional data coming from motion or stereo. Moreover, the method can be easily extended to other object classes. The experiments are performed on three large datasets: the INRIA person dataset, the Daimler Multicue dataset, and a new challenging dataset, called PobleSec, in which a considerable number of targets are partially occluded. The different approaches are evaluated at the classification and detection levels for both partially occluded and non-occluded data. The experimental results show that our detector outperforms state-of-the-art approaches in the presence of partial occlusions, while offering performance and reliability similar to those of the holistic approach on non-occluded data. The datasets used in our experiments have been made publicly available for benchmarking purposes.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2168-2267 ISBN Medium
Area Expedition Conference
Notes ADAS; 605.203; 600.057; 600.054; 601.042; 601.187; 600.076 Approved no
Call Number ADAS @ adas @ MVL2014 Serial 2213
Permanent link to this record
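A minimal random-subspace ensemble in the spirit of the record above: each member classifier is trained on a random subset of the descriptor dimensions, so members relying on non-occluded parts can still respond when other parts are corrupted. The component-selection criterion, the pedestrian descriptor and the occlusion-specific evaluation of the paper are not reproduced; the base learner and data below are toy stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


class RandomSubspaceEnsemble:
    def __init__(self, n_members=15, subspace_ratio=0.3, seed=0):
        self.n_members = n_members
        self.subspace_ratio = subspace_ratio
        self.rng = np.random.default_rng(seed)
        self.members = []

    def fit(self, X, y):
        k = max(1, int(self.subspace_ratio * X.shape[1]))
        for _ in range(self.n_members):
            idx = self.rng.choice(X.shape[1], size=k, replace=False)
            clf = LogisticRegression(max_iter=1000).fit(X[:, idx], y)
            self.members.append((idx, clf))
        return self

    def predict(self, X):
        votes = np.mean([clf.predict_proba(X[:, idx])[:, 1]
                         for idx, clf in self.members], axis=0)
        return (votes > 0.5).astype(int)


X = np.random.randn(200, 60)                         # toy window descriptors
y = (X[:, :5].sum(axis=1) > 0).astype(int)           # toy pedestrian / background labels
print(RandomSubspaceEnsemble().fit(X, y).predict(X[:10]))
```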
 

 
Author Oscar Lopes; Miguel Reyes; Sergio Escalera; Jordi Gonzalez
Title Spherical Blurred Shape Model for 3-D Object and Pose Recognition: Quantitative Analysis and HCI Applications in Smart Environments Type Journal Article
Year 2014 Publication IEEE Transactions on Systems, Man and Cybernetics (Part B) Abbreviated Journal TSMCB
Volume 44 Issue 12 Pages 2379-2390
Keywords
Abstract The use of depth maps is of increasing interest after the advent of cheap multisensor devices based on structured light, such as Kinect. In this context, there is a strong need for powerful 3-D shape descriptors able to generate rich object representations. Although several 3-D descriptors have already been proposed in the literature, the search for discriminative and computationally efficient descriptors is still an open issue. In this paper, we propose a novel point cloud descriptor called spherical blurred shape model (SBSM) that successfully encodes the structure density and local variabilities of an object based on shape voxel distances and a neighborhood propagation strategy. The proposed SBSM is proven to be rotation and scale invariant, robust to noise and occlusions, highly discriminative for multiple categories of complex objects like the human hand, and computationally efficient since the SBSM complexity is linear in the number of object voxels. Experimental evaluation on public multiclass depth object data, 3-D facial expression data, and a novel hand pose dataset shows significant performance improvements relative to state-of-the-art approaches. Moreover, the effectiveness of the proposal is also proved for object spotting in 3-D scenes and for real-time automatic hand pose recognition in human-computer interaction scenarios.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2168-2267 ISBN Medium
Area Expedition Conference
Notes HuPBA; ISE; 600.078;MILAB Approved no
Call Number Admin @ si @ LRE2014 Serial 2442
Permanent link to this record