Records
Author Susana Alvarez; Anna Salvatella; Maria Vanrell; Xavier Otazu
Title Low-dimensional and Comprehensive Color Texture Description Type Journal Article
Year 2012 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU
Volume 116 Issue 1 Pages 54-67
Keywords
Abstract Image retrieval can be handled by combining standard descriptors, such as those of MPEG-7, which are defined independently for each visual cue (e.g. SCD or CLD for color, HTD for texture, or EHD for edges).
A common problem is how to combine similarities coming from descriptors that represent different concepts in different spaces. In this paper we propose a color texture description that bypasses this problem by its inherent definition. It is based on a low-dimensional space with 6 perceptual axes. Texture is described in a 3D space derived from a direct implementation of the original Julesz’s Texton theory, and color is described in a 3D perceptual space. This early fusion through the blob concept in these two bounded spaces avoids the problem and allows us to derive a sparse color-texture descriptor that achieves similar performance compared to MPEG-7 in image retrieval. Moreover, our descriptor presents comprehensive qualities since it can also be applied to segmentation or browsing: (a) a dense image representation is defined from the descriptor, showing a reasonable performance in locating texture patterns included in complex images; and (b) a vocabulary of basic terms is derived to build an intermediate-level descriptor in natural language, improving browsing by bridging the semantic gap.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1077-3142 ISBN Medium
Area Expedition Conference
Notes CAT;CIC Approved no
Call Number Admin @ si @ ASV2012 Serial 1827
Permanent link to this record
 

 
Author Alicia Fornes; Josep Llados; Gemma Sanchez; Horst Bunke
Title Writer Identification in Old Handwritten Music Scores Type Book Chapter
Year 2012 Publication Pattern Recognition and Signal Processing in Archaeometry: Mathematical and Computational Solutions for Archaeology Abbreviated Journal
Volume Issue Pages 27-63
Keywords
Abstract The aim of writer identification is to determine the writer of a piece of handwriting from a set of writers. In this paper we present a system for writer identification in old handwritten music scores. Even though a significant number of compositions contain handwritten text in the music scores, the aim of our work is to use only the music notation to determine the author. The steps of the proposed system are the following. First of all, the music sheet is preprocessed and normalized to obtain a single binarized music line without the staff lines. Afterwards, 100 features are extracted for every music line, which are subsequently used in a k-NN classifier that compares every feature vector with prototypes stored in a database. By applying feature selection and extraction methods on the original feature set, the performance is increased. The proposed method has been tested on a database of old music scores from the 17th to 19th centuries, achieving a recognition rate of about 95%.
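The classification stage outlined in the abstract (100 features per music line, feature selection, and a k-NN comparison against stored prototypes) could be sketched roughly as below. This is not the authors' implementation: the feature extraction is omitted, and the data layout, k, and number of selected features are illustrative assumptions.

```python
# Minimal sketch of the classification stage described above: feature
# selection on 100-dimensional line descriptors followed by k-NN against
# a database of writer prototypes. Feature extraction itself is omitted;
# X_train/X_test are assumed to hold one 100-D vector per music line.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_writers, lines_per_writer, n_features = 10, 20, 100

# Placeholder data standing in for the real music-line descriptors.
X_train = rng.normal(size=(n_writers * lines_per_writer, n_features))
y_train = np.repeat(np.arange(n_writers), lines_per_writer)
X_test = rng.normal(size=(30, n_features))

# Select a subset of discriminative features, then classify each line by
# comparing it with the stored prototypes (here: all training lines).
clf = make_pipeline(SelectKBest(f_classif, k=40),
                    KNeighborsClassifier(n_neighbors=5))
clf.fit(X_train, y_train)
predicted_writer = clf.predict(X_test)
print(predicted_writer[:10])
```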
Address
Corporate Author Thesis
Publisher IGI-Global Place of Publication Editor Constantin Papaodysseus
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG Approved no
Call Number Admin @ si @ FLS2012 Serial 1828
Permanent link to this record
 

 
Author Carolina Malagelada; F.De Lorio; Santiago Segui; S. Mendez; Michal Drozdzal; Jordi Vitria; Petia Radeva; J.Santos; Anna Accarino; Juan R. Malagelada; Fernando Azpiroz
Title Functional gut disorders or disordered gut function? Small bowel dysmotility evidenced by an original technique Type Journal Article
Year 2012 Publication Neurogastroenterology & Motility Abbreviated Journal NEUMOT
Volume 24 Issue 3 Pages 223-230
Keywords capsule endoscopy;computer vision analysis;machine learning technique;small bowel motility
Abstract
Background: This study aimed to determine the proportion of cases with abnormal intestinal motility among patients with functional bowel disorders. To this end, we applied an original method, previously developed in our laboratory, for the analysis of endoluminal images obtained by capsule endoscopy. This novel technology is based on computer vision and machine learning techniques.
Methods: The endoscopic capsule (Pillcam SB1; Given Imaging, Yokneam, Israel) was administered to 80 patients with functional bowel disorders and 70 healthy subjects. Endoluminal image analysis was performed with a computer vision program developed for the evaluation of contractile events (luminal occlusions and radial wrinkles), non-contractile patterns (open tunnel and smooth wall patterns), type of content (secretions, chyme) and motion of the wall and contents. The normality range and the discrimination of abnormal cases were established by a machine learning technique: an iterative classifier (one-class support vector machine) was applied using a random population of 50 healthy subjects as a training set and the remaining subjects (20 healthy subjects and 80 patients) as a test set.
Key Results: The classifier identified as abnormal 29% of the patients with functional diseases of the bowel (23 of 80), and as normal 97% of the healthy subjects (68 of 70) (P < 0.05 by chi-squared test). Patients identified as abnormal clustered in two groups, exhibiting either a hyper- or a hypodynamic motility pattern. The motor behavior was unrelated to clinical features.
Conclusions & Inferences: With appropriate methodology, abnormal intestinal motility can be demonstrated in a significant proportion of patients with functional bowel disorders, implying a pathologic disturbance of gut physiology.
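The normality model from the Methods (a one-class SVM trained on 50 healthy subjects and applied to the remaining subjects) might look roughly like the following sketch; the per-subject motility descriptors, kernel, and nu parameter are placeholders, not the study's actual settings.

```python
# Rough sketch of the one-class classification step: train a normality
# model on descriptors of healthy subjects and flag test subjects that
# fall outside it. The per-subject motility descriptors are placeholders.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
X_healthy_train = rng.normal(size=(50, 12))               # 50 healthy subjects
X_test = np.vstack([rng.normal(size=(20, 12)),            # 20 held-out healthy
                    rng.normal(loc=1.5, size=(80, 12))])  # 80 patients

model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
model.fit(X_healthy_train)

# +1 = within the normality range, -1 = abnormal motility pattern.
labels = model.predict(X_test)
print("flagged abnormal:", int((labels == -1).sum()), "of", len(labels))
```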
Address
Corporate Author Thesis
Publisher Wiley Online Library Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; OR; MV Approved no
Call Number Admin @ si @ MLS2012 Serial 1830
Permanent link to this record
 

 
Author Michal Drozdzal; Petia Radeva; Santiago Segui; Laura Igual; Carolina Malagelada; Fernando Azpiroz; Jordi Vitria
Title System and Method for Improving a Discriminative Model Type Patent
Year 2012 Publication US 61/450,886 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Given Imaging
Corporate Author US Patent Office Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; OR;MV Approved no
Call Number Admin @ si @ DRS2012a Serial 1896
Permanent link to this record
 

 
Author Antonio Hernandez; Nadezhda Zlateva; Alexander Marinov; Miguel Reyes; Petia Radeva; Dimo Dimov; Sergio Escalera
Title Graph Cuts Optimization for Multi-Limb Human Segmentation in Depth Maps Type Conference Article
Year 2012 Publication 25th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 726-732
Keywords
Abstract We present a generic framework for object segmentation using depth maps, based on Random Forest and Graph-cuts theory, and apply it to the segmentation of human limbs in depth maps. First, from a set of random depth features, a Random Forest is used to infer a set of label probabilities for each data sample. This vector of probabilities is used as the unary term in an α-β swap Graph-cuts algorithm. Moreover, the depth of spatio-temporal neighboring data points is used as the boundary potential. Results on a new multi-label human depth data set show high performance of the novel methodology, in terms of segmentation overlap, compared to classical approaches.
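As a rough illustration of how the per-pixel Random Forest probabilities and depth differences become graph-cut potentials, the sketch below computes negative-log-probability unary terms and depth-based boundary weights; the random depth features and the α-β swap optimization itself (which would be delegated to a graph-cuts solver) are not reproduced.

```python
# Sketch of the energy terms described above: a Random Forest gives each
# depth pixel a vector of label probabilities, used as unary potentials,
# while depth differences between neighbours give boundary potentials.
# The alpha-beta swap minimization itself would be done by a graph-cuts
# solver and is not included here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
h, w, n_labels = 60, 80, 5

depth = rng.uniform(0.5, 4.0, size=(h, w))            # placeholder depth map
features = depth.reshape(-1, 1)                        # stand-in for depth features
train_labels = rng.integers(0, n_labels, size=h * w)   # placeholder annotations

rf = RandomForestClassifier(n_estimators=20, random_state=0)
rf.fit(features, train_labels)
proba = rf.predict_proba(features)                     # (h*w, n_labels)

# Unary term: -log P(label | pixel), clipped to avoid log(0).
unary = -np.log(np.clip(proba, 1e-6, 1.0)).reshape(h, w, n_labels)

# Pairwise (boundary) term between horizontal neighbours: small weight
# across large depth discontinuities, large weight on smooth regions.
sigma = 0.1
pairwise_h = np.exp(-((depth[:, 1:] - depth[:, :-1]) ** 2) / (2 * sigma ** 2))
print(unary.shape, pairwise_h.shape)
```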
Address Providence; Rhode Island; June 2012
Corporate Author Thesis
Publisher IEEE Xplore Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN 978-1-4673-1226-4 Medium
Area Expedition Conference CVPR
Notes MILAB;HuPBA Approved no
Call Number Admin @ si @ HZM2012b Serial 2046
Permanent link to this record
 

 
Author Naveen Onkarappa; Sujay M. Veerabhadrappa; Angel Sappa
Title Optical Flow in Onboard Applications: A Study on the Relationship Between Accuracy and Scene Texture Type Conference Article
Year 2012 Publication 4th International Conference on Signal and Image Processing Abbreviated Journal
Volume 221 Issue Pages 257-267
Keywords
Abstract Optical flow plays a major role in making advanced driver assistance systems (ADAS) a reality. ADAS applications are expected to perform efficiently in all kinds of environments, since a vehicle may be driven on different kinds of roads, at different times and in different seasons. In this work, we study the relationship between optical flow and different roads by analyzing optical flow accuracy on different road textures. Three texture measures are evaluated for this purpose. Further, the relation of the regularization weight to flow accuracy in the presence of different textures is also analyzed. Additionally, we present a framework to generate synthetic sequences of different textures in ADAS scenarios with ground-truth optical flow.
Address Coimbatore, India
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1876-1100 ISBN 978-81-322-0996-6 Medium
Area Expedition Conference ICSIP
Notes ADAS Approved no
Call Number Admin @ si @ OVS2012 Serial 2356
Permanent link to this record
 

 
Author Mohammad Rouhani; Angel Sappa
Title Implicit Polynomial Representation through a Fast Fitting Error Estimation Type Journal Article
Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 21 Issue 4 Pages 2089-2098
Keywords
Abstract This paper presents a simple distance estimation for implicit polynomial fitting. It is computed as the height of a simplex built between the point and the surface (i.e., a triangle in 2-D or a tetrahedron in 3-D), which is used as a coarse but reliable estimation of the orthogonal distance. The proposed distance can be described as a function of the coefficients of the implicit polynomial. Moreover, it is differentiable and has a smooth behavior. Hence, it can be used in any gradient-based optimization. In this paper, its use in a Levenberg-Marquardt framework is shown, which is particularly suited to nonlinear least squares problems. The proposed estimation is a generalization of the gradient-based distance estimation, which is widely used in the literature. Experimental results on both 2-D and 3-D data sets are provided. Comparisons with state-of-the-art techniques are presented, showing the advantages of the proposed approach.
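The abstract notes that the proposed simplex-height distance generalizes the widely used gradient-based estimate |f(p)| / ||∇f(p)||. The sketch below shows that baseline estimate inside a Levenberg-Marquardt fit of a 2-D implicit conic, as a hedged illustration of the fitting loop rather than the paper's own distance.

```python
# Illustration of implicit polynomial fitting with a Levenberg-Marquardt
# solver, using the classical gradient-based distance |f|/||grad f|| that
# the paper's simplex-height estimate generalizes (the simplex distance
# itself is not reproduced here). A conic f(x,y) = x^2 + c1*x*y + c2*y^2
# + c3*x + c4*y + c5 is fitted; the leading coefficient is fixed to 1 to
# avoid the trivial all-zero solution.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
t = rng.uniform(0, 2 * np.pi, 200)
pts = np.c_[2.0 * np.cos(t), 1.0 * np.sin(t)] + 0.02 * rng.normal(size=(200, 2))

def residuals(c, xy):
    x, y = xy[:, 0], xy[:, 1]
    f = x**2 + c[0]*x*y + c[1]*y**2 + c[2]*x + c[3]*y + c[4]
    fx = 2*x + c[0]*y + c[2]
    fy = c[0]*x + 2*c[1]*y + c[3]
    # Approximate point-to-curve distance used as the residual.
    return f / np.sqrt(fx**2 + fy**2 + 1e-12)

fit = least_squares(residuals, x0=np.array([0.0, 1.0, 0.0, 0.0, -1.0]),
                    args=(pts,), method="lm")
print("estimated coefficients:", np.round(fit.x, 3))
```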
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number Admin @ si @ RoS2012b; ADAS @ adas @ Serial 1937
Permanent link to this record
 

 
Author J. Stöttinger; A. Hanbury; N. Sebe; Theo Gevers
Title Sparse Color Interest Points for Image Retrieval and Object Categorization Type Journal Article
Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 21 Issue 5 Pages 2681-2692
Keywords
Abstract
Interest point detection is an important research area in the field of image processing and computer vision. In particular, image retrieval and object categorization heavily rely on interest point detection from which local image descriptors are computed for image matching. In general, interest points are based on luminance, and color has been largely ignored. However, the use of color increases the distinctiveness of interest points. The use of color may therefore provide selective search reducing the total number of interest points used for image matching. This paper proposes color interest points for sparse image representation. To reduce the sensitivity to varying imaging conditions, light-invariant interest points are introduced. Color statistics based on occurrence probability lead to color boosted points, which are obtained through saliency-based feature selection. Furthermore, a principal component analysis-based scale selection method is proposed, which gives a robust scale estimation per interest point. From large-scale experiments, it is shown that the proposed color interest point detector has higher repeatability than a luminance-based one. Furthermore, in the context of image retrieval, a reduced and predictable number of color features show an increase in performance compared to state-of-the-art interest points. Finally, in the context of object recognition, for the Pascal VOC 2007 challenge, our method gives comparable performance to state-of-the-art methods using only a small fraction of the features, reducing the computing time considerably.
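As a loose illustration of color interest points, the sketch below runs a multi-channel Harris detector whose structure tensor is accumulated over the R, G and B gradients rather than luminance only; the paper's light-invariant representations, color boosting and PCA-based scale selection are not reproduced, and all parameters are illustrative.

```python
# Minimal sketch of color interest points via a multi-channel Harris
# detector: the structure tensor is accumulated over the R, G, B channel
# gradients instead of luminance only. The paper's light-invariant
# representations, color boosting and PCA-based scale selection are not
# reproduced here.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
img = rng.random((120, 160, 3))          # placeholder RGB image in [0, 1]

Ixx = np.zeros(img.shape[:2]); Iyy = np.zeros_like(Ixx); Ixy = np.zeros_like(Ixx)
for c in range(3):                        # accumulate tensor over color channels
    gx = ndimage.sobel(img[..., c], axis=1)
    gy = ndimage.sobel(img[..., c], axis=0)
    Ixx += ndimage.gaussian_filter(gx * gx, 2.0)
    Iyy += ndimage.gaussian_filter(gy * gy, 2.0)
    Ixy += ndimage.gaussian_filter(gx * gy, 2.0)

k = 0.04
harris = Ixx * Iyy - Ixy**2 - k * (Ixx + Iyy)**2     # Harris cornerness
# Keep local maxima above a threshold as sparse color interest points.
maxima = (harris == ndimage.maximum_filter(harris, size=9))
points = np.argwhere(maxima & (harris > 0.01 * harris.max()))
print("detected points:", len(points))
```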
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ SHS2012 Serial 1847
Permanent link to this record
 

 
Author R. Valenti; N. Sebe; Theo Gevers
Title What Are You Looking at? Improving Visual Gaze Estimation by Saliency Type Journal Article
Year 2012 Publication International Journal of Computer Vision Abbreviated Journal IJCV
Volume 98 Issue 3 Pages 324-334
Keywords
Abstract
In this paper we present a novel mechanism to obtain enhanced gaze estimation for subjects looking at a scene or an image. The system makes use of prior knowledge about the scene (e.g. an image on a computer screen), to define a probability map of the scene the subject is gazing at, in order to find the most probable location. The proposed system helps in correcting the fixations which are erroneously estimated by the gaze estimation device by employing a saliency framework to adjust the resulting gaze point vector. The system is tested on three scenarios: using eye tracking data, enhancing a low accuracy webcam based eye tracker, and using a head pose tracker. The correlation between the subjects in the commercial eye tracking data is improved by an average of 13.91%. The correlation on the low accuracy eye gaze tracker is improved by 59.85%, and for the head pose tracker we obtain an improvement of 10.23%. These results show the potential of the system as a way to enhance and self-calibrate different visual gaze estimation systems.
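The correction idea can be sketched roughly as follows: treat a saliency map as a probability map of likely fixation targets and move the estimated gaze point to the most probable location in its neighbourhood. The Gaussian window and weighting below are assumptions, not the paper's exact formulation.

```python
# Rough sketch of saliency-based gaze correction: the raw gaze estimate is
# moved to the most probable location of a saliency (probability) map,
# weighted by a Gaussian prior centred on the original estimate. The
# window size and prior width are illustrative choices only.
import numpy as np

def correct_gaze(gaze_xy, saliency, sigma=25.0):
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    prior = np.exp(-((xs - gaze_xy[0])**2 + (ys - gaze_xy[1])**2) / (2 * sigma**2))
    score = saliency * prior                 # probability of each candidate fixation
    y, x = np.unravel_index(np.argmax(score), score.shape)
    return np.array([x, y])

rng = np.random.default_rng(5)
saliency_map = rng.random((480, 640))        # placeholder saliency/probability map
raw_gaze = np.array([300, 200])              # (x, y) from the gaze estimation device
print("corrected gaze:", correct_gaze(raw_gaze, saliency_map))
```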
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0920-5691 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ VSG2012 Serial 1848
Permanent link to this record
 

 
Author R. Valenti; Theo Gevers
Title Accurate Eye Center Location through Invariant Isocentric Patterns Type Journal Article
Year 2012 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 34 Issue 9 Pages 1785-1798
Keywords
Abstract
Locating the center of the eyes allows for valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and impossible to use on standard (i.e., visible wavelength), low-resolution images of eyes. Systems based solely on appearance have been proposed in the literature, but their accuracy does not allow us to accurately locate and distinguish eye center movements in these low-resolution settings. Our aim is to bridge this gap by locating the center of the eye within the area of the pupil on low-resolution images taken from a webcam or a similar device. The proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve in-plane rotational invariance, and to keep computational costs low. To further gain scale invariance, the approach is applied to a scale-space pyramid. In this paper, we extensively test our approach for its robustness to changes in illumination, head pose, scale, occlusion, and eye rotation. We demonstrate that our system can achieve a significant improvement in accuracy over state-of-the-art techniques for eye center location in standard low-resolution imagery.
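A heavily simplified sketch of isophote-based center voting is given below: each pixel votes for the estimated center of its isophote, computed from image derivatives via the standard isophote-curvature displacement, and the most voted location is taken as the eye center. The curvedness weighting, scale-space pyramid and selection heuristics of the full method are omitted, and the displacement formula should be treated as an approximation of the published one.

```python
# Hedged sketch of isophote-based center voting: every pixel casts a vote
# at the estimated center of its isophote, computed from first and second
# image derivatives. Curvedness weighting, the scale-space pyramid and the
# selection heuristics of the full method are omitted.
import numpy as np
from scipy import ndimage

def isophote_center_votes(gray, sigma=2.0):
    L = ndimage.gaussian_filter(gray.astype(float), sigma)
    Lx = np.gradient(L, axis=1)
    Ly = np.gradient(L, axis=0)
    Lxx = np.gradient(Lx, axis=1)
    Lyy = np.gradient(Ly, axis=0)
    Lxy = np.gradient(Lx, axis=0)
    denom = Ly**2 * Lxx - 2 * Lx * Lxy * Ly + Lx**2 * Lyy
    denom[np.abs(denom) < 1e-9] = 1e-9
    # Displacement from each pixel to the center of its isophote.
    dx = -Lx * (Lx**2 + Ly**2) / denom
    dy = -Ly * (Lx**2 + Ly**2) / denom
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx = np.clip(np.round(xs + dx).astype(int), 0, w - 1)
    cy = np.clip(np.round(ys + dy).astype(int), 0, h - 1)
    votes = np.zeros_like(L)
    np.add.at(votes, (cy, cx), 1.0)
    return ndimage.gaussian_filter(votes, sigma)

rng = np.random.default_rng(6)
eye_patch = rng.random((60, 80))                  # placeholder eye region
acc = isophote_center_votes(eye_patch)
print("estimated center (y, x):", np.unravel_index(np.argmax(acc), acc.shape))
```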
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ VaG 2012a Serial 1849
Permanent link to this record
 

 
Author Arjan Gijsenij; Theo Gevers; Joost Van de Weijer
Title Improving Color Constancy by Photometric Edge Weighting Type Journal Article
Year 2012 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 34 Issue 5 Pages 918-929
Keywords
Abstract Edge-based color constancy methods make use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as material, shadow and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation. Therefore, in this paper, an extensive analysis is provided of the influence of different edge types on the performance of edge-based color constancy methods. First, an edge-based taxonomy is presented, classifying edge types based on their photometric properties (e.g. material, shadow-geometry and highlights). Then, a performance evaluation of edge-based color constancy is provided using these different edge types. From this performance evaluation it is derived that specular and shadow edge types are more valuable than material edges for the estimation of the illuminant. To this end, the (iterative) weighted Grey-Edge algorithm is proposed, in which these edge types are more strongly emphasized for the estimation of the illuminant. Images recorded under controlled circumstances demonstrate that the proposed iterative weighted Grey-Edge algorithm based on highlights reduces the median angular error by approximately 25%. In an uncontrolled environment, improvements in angular error of up to 11% are obtained with respect to regular edge-based color constancy.
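The weighted variant builds on the standard Grey-Edge estimate, i.e. the Minkowski norm of the smoothed image derivatives per color channel; a minimal version of that baseline is sketched below. The paper's photometric edge classification and iterative re-weighting are not included, and the Minkowski order and smoothing scale are illustrative.

```python
# Minimal Grey-Edge baseline: the illuminant color is estimated from the
# Minkowski p-norm of the (Gaussian-smoothed) image derivatives in each
# channel. The paper's contribution -- classifying edges photometrically
# and re-weighting specular/shadow edges, iteratively -- is not included.
import numpy as np
from scipy import ndimage

def grey_edge(img, p=6, sigma=2.0):
    est = np.zeros(3)
    for c in range(3):
        gx = ndimage.gaussian_filter(img[..., c], sigma, order=(0, 1))
        gy = ndimage.gaussian_filter(img[..., c], sigma, order=(1, 0))
        grad_mag = np.sqrt(gx**2 + gy**2)
        est[c] = np.mean(grad_mag**p) ** (1.0 / p)
    return est / np.linalg.norm(est)           # unit-norm illuminant estimate

rng = np.random.default_rng(7)
image = rng.random((240, 320, 3)) * np.array([1.0, 0.8, 0.6])  # reddish cast
print("estimated illuminant:", np.round(grey_edge(image), 3))
```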
Address Los Alamitos; CA; USA;
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes CIC;ISE Approved no
Call Number Admin @ si @ GGW2012 Serial 1850
Permanent link to this record
 

 
Author R. Valenti; Theo Gevers
Title Combining Head Pose and Eye Location Information for Gaze Estimation Type Journal Article
Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 21 Issue 2 Pages 802-815
Keywords
Abstract
Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, the experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ VaG 2012b Serial 1851
Permanent link to this record
 

 
Author Arjan Gijsenij; R. Lu; Theo Gevers; De Xu
Title Color Constancy for Multiple Light Sources Type Journal Article
Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 21 Issue 2 Pages 697-707
Keywords
Abstract
Color constancy algorithms are generally based on the simplifying assumption that the spectral distribution of a light source is uniform across scenes. However, in reality, this assumption is often violated due to the presence of multiple light sources. In this paper, we will address more realistic scenarios where the uniform light-source assumption is too restrictive. First, a methodology is proposed to extend existing algorithms by applying color constancy locally to image patches, rather than globally to the entire image. After local (patch-based) illuminant estimation, these estimates are combined into more robust estimations, and a local correction is applied based on a modified diagonal model. Quantitative and qualitative experiments on spectral and real images show that the proposed methodology reduces the influence of two light sources simultaneously present in one scene. If the chromatic difference between these two illuminants is more than 1°, the proposed framework outperforms algorithms based on the uniform light-source assumption (with error reduction up to approximately 30%). Otherwise, when the chromatic difference is less than 1° and the scene can be considered to contain one (approximately) uniform light source, the performance of the proposed framework is similar to that of global color constancy methods.
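A simplified sketch of the local strategy is given below: estimate an illuminant per image patch (here with grey-world, standing in for any existing global algorithm), smooth the per-patch estimates to pixel resolution, and apply a per-pixel diagonal (von Kries) correction. The paper's robust combination rule and modified diagonal model are not reproduced.

```python
# Simplified sketch of local (patch-based) color constancy: estimate an
# illuminant per patch with grey-world, upsample and smooth the patch
# estimates to pixel resolution, and apply a per-pixel diagonal (von
# Kries) correction. The paper's robust combination rule and modified
# diagonal model are not reproduced here.
import numpy as np
from scipy import ndimage

def local_color_constancy(img, patch=32):
    h, w, _ = img.shape
    ph, pw = h // patch, w // patch
    est = np.zeros((ph, pw, 3))
    for i in range(ph):
        for j in range(pw):
            block = img[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            e = block.reshape(-1, 3).mean(axis=0)          # grey-world per patch
            est[i, j] = e / (np.linalg.norm(e) + 1e-9)
    # Upsample the per-patch estimates to per-pixel illuminant maps and smooth.
    full = np.stack([ndimage.zoom(est[..., c], (h / ph, w / pw), order=1)
                     for c in range(3)], axis=-1)
    full = ndimage.gaussian_filter(full, sigma=(patch, patch, 0))
    corrected = img / (np.sqrt(3) * full + 1e-9)           # diagonal correction
    return np.clip(corrected, 0, 1)

rng = np.random.default_rng(8)
scene = rng.random((128, 192, 3))
print(local_color_constancy(scene).shape)
```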
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ GLG2012a Serial 1852
Permanent link to this record
 

 
Author Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers
Title A Statistical Method for 2D Facial Landmarking Type Journal Article
Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 21 Issue 2 Pages 844-858
Keywords
Abstract
Many facial-analysis approaches rely on robust and accurate automatic facial landmarking to function correctly. In this paper, we describe a statistical method for automatic facial-landmark localization. Our landmarking relies on a parsimonious mixture model of Gabor wavelet features, computed in coarse-to-fine fashion and complemented with a shape prior. We assess the accuracy and the robustness of the proposed approach in extensive cross-database conditions conducted on four face data sets (Face Recognition Grand Challenge, Cohn-Kanade, Bosphorus, and BioID). Our method has 99.33% accuracy on the Bosphorus database and 97.62% accuracy on the BioID database on average, which improves on the state of the art. We show that the method is not significantly affected by low-resolution images, small rotations, facial expressions, and natural occlusions such as beard and mustache. We further test the goodness of the landmarks in a facial expression recognition application and report landmarking-induced improvement over the baseline on two separate databases for video-based expression recognition (Cohn-Kanade and BU-4DFE).
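A hedged sketch of the statistical scoring step is given below: responses of a small Gabor filter bank are sampled at candidate positions and scored by a Gaussian mixture model trained on example landmark locations. The coarse-to-fine search, the shape prior and the actual training data of the paper are omitted; all filter parameters and mixture settings are assumptions.

```python
# Hedged sketch of statistical landmarking with Gabor features: responses
# of a small Gabor filter bank are sampled at candidate positions and
# scored by a Gaussian mixture model trained on example landmark patches.
# The coarse-to-fine search and the shape prior of the paper are omitted.
import numpy as np
from scipy import ndimage
from sklearn.mixture import GaussianMixture

def gabor_kernel(sigma, theta, lam, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, points):
    bank = [gabor_kernel(3.0, t, 8.0) for t in np.linspace(0, np.pi, 4, endpoint=False)]
    responses = [ndimage.convolve(img, k) for k in bank]
    return np.array([[r[y, x] for r in responses] for (y, x) in points])

rng = np.random.default_rng(9)
face = rng.random((100, 100))                       # placeholder face image
train_pts = rng.integers(10, 90, size=(200, 2))     # stand-in landmark examples
gmm = GaussianMixture(n_components=3, random_state=0)
gmm.fit(gabor_features(face, train_pts))

candidates = rng.integers(10, 90, size=(50, 2))     # candidate locations to score
scores = gmm.score_samples(gabor_features(face, candidates))
best = candidates[np.argmax(scores)]
print("most landmark-like candidate (y, x):", best)
```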
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ DSG 2012 Serial 1853
Permanent link to this record
 

 
Author Francesc Tanarro Marquez; Pau Gratacos Marti; F. Javier Sanchez; Joan Ramon Jimenez Minguell; Coen Antens; Enric Sala i Esteva
Title A device for monitoring condition of a railway supply line Type Patent
Year 2012 Publication EP 2 404 777 A1 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract The device is intended to monitor the condition of a railway supply line when the supply line is in contact with the head of a pantograph of a vehicle, in order to power said vehicle. The device includes a camera for monitoring parameters indicative of the operating capability of said supply line, and a reflective element comprising a pattern, intended to be arranged on the pantograph head. The camera is intended to be arranged on the vehicle so as to register the position of the pattern with respect to the vertical direction.
Address
Corporate Author ALSTOM Transport SA Thesis
Publisher European Patent Office Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MV Approved no
Call Number IAM @ iam @ MMS2012 Serial 1854
Permanent link to this record