|
R. Valenti, & Theo Gevers. (2012). Combining Head Pose and Eye Location Information for Gaze Estimation. TIP - IEEE Transactions on Image Processing, 21(2), 802–815.
Abstract: Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, the experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head.
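The core fusion idea (the head-pose transformation maps the eye-in-head gaze direction into the camera frame) can be illustrated with a minimal sketch. This is a deliberate simplification of the paper's scheme, assuming a yaw-only head rotation; all function names are hypothetical:

```python
import math

def rotation_y(theta):
    """3x3 rotation about the vertical axis (head yaw), row-major."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matvec(R, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def combined_gaze(head_yaw_deg, eye_dir):
    """Rotate the eye-in-head gaze direction into the camera frame
    using the head pose, mirroring the idea of fusing the two cues
    (illustrative simplification: yaw-only head pose)."""
    R = rotation_y(math.radians(head_yaw_deg))
    return matvec(R, eye_dir)

# Eyes looking straight ahead while the head is turned 30 degrees:
g = combined_gaze(30.0, [0.0, 0.0, 1.0])
```

With the head turned 30° and the eyes centered, the combined gaze vector tilts 30° away from the camera axis, which is the behavior the hybrid scheme exploits.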
|
|
|
Arjan Gijsenij, R. Lu, Theo Gevers, & De Xu. (2012). Color Constancy for Multiple Light Sources. TIP - IEEE Transactions on Image Processing, 21(2), 697–707.
Abstract: Color constancy algorithms are generally based on the simplifying assumption that the spectral distribution of a light source is uniform across scenes. However, in reality, this assumption is often violated due to the presence of multiple light sources. In this paper, we will address more realistic scenarios where the uniform light-source assumption is too restrictive. First, a methodology is proposed to extend existing algorithms by applying color constancy locally to image patches, rather than globally to the entire image. After local (patch-based) illuminant estimation, these estimates are combined into more robust estimations, and a local correction is applied based on a modified diagonal model. Quantitative and qualitative experiments on spectral and real images show that the proposed methodology reduces the influence of two light sources simultaneously present in one scene. If the chromatic difference between these two illuminants is more than 1°, the proposed framework outperforms algorithms based on the uniform light-source assumption (with error reduction up to approximately 30%). Otherwise, when the chromatic difference is less than 1° and the scene can be considered to contain one (approximately) uniform light source, the performance of the proposed framework is similar to that of global color constancy methods.
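The local strategy described above can be sketched with a toy implementation: a grey-world illuminant estimate per patch followed by a von Kries-style diagonal correction. This is an illustration of the general idea, not the paper's exact algorithm (which also combines the local estimates into more robust ones):

```python
def grey_world(patch):
    """Grey-world illuminant estimate: the per-channel mean of the patch."""
    n = len(patch)
    return tuple(sum(px[c] for px in patch) / n for c in range(3))

def correct(patch, illum):
    """von Kries-style diagonal correction: divide each channel by the
    estimated illuminant so the patch average becomes achromatic."""
    return [tuple(px[c] / illum[c] for c in range(3)) for px in patch]

def local_color_constancy(image_patches):
    """Estimate an illuminant per patch and correct each patch locally,
    rather than assuming one global light source for the whole image."""
    out = []
    for patch in image_patches:
        e = grey_world(patch)
        out.append(correct(patch, e))
    return out

# One patch under a reddish local illuminant; RGB values in [0, 1]:
patches = [[(0.8, 0.4, 0.4), (0.4, 0.2, 0.2)]]
corrected = local_color_constancy(patches)
```

Because the correction is applied per patch, two differently colored light sources in one scene each get their own estimate, which is what the uniform-light-source algorithms cannot do.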
|
|
|
Hamdi Dibeklioglu, Albert Ali Salah, & Theo Gevers. (2012). A Statistical Method for 2D Facial Landmarking. TIP - IEEE Transactions on Image Processing, 21(2), 844–858.
Abstract: Many facial-analysis approaches rely on robust and accurate automatic facial landmarking to function correctly. In this paper, we describe a statistical method for automatic facial-landmark localization. Our landmarking relies on a parsimonious mixture model of Gabor wavelet features, computed in coarse-to-fine fashion and complemented with a shape prior. We assess the accuracy and the robustness of the proposed approach in extensive cross-database conditions conducted on four face data sets (Face Recognition Grand Challenge, Cohn-Kanade, Bosphorus, and BioID). Our method has 99.33% accuracy on the Bosphorus database and 97.62% accuracy on the BioID database on average, which improves the state of the art. We show that the method is not significantly affected by low-resolution images, small rotations, facial expressions, and natural occlusions such as beard and mustache. We further test the goodness of the landmarks in a facial expression recognition application and report landmarking-induced improvement over baseline on two separate databases for video-based expression recognition (Cohn-Kanade and BU-4DFE).
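The coarse-to-fine search strategy can be sketched independently of the Gabor mixture model itself; here `score()` stands in for the learned landmark likelihood, and all names and parameters are illustrative:

```python
def coarse_to_fine_search(score, width, height, coarse_step=8):
    """Coarse-to-fine landmark localization: evaluate the match score on
    a coarse grid, then refine in a local window around the best coarse
    hit. score(x, y) is any per-pixel likelihood (illustrative only)."""
    # Coarse pass: sparse grid over the whole search region.
    best = max(((x, y) for x in range(0, width, coarse_step)
                       for y in range(0, height, coarse_step)),
               key=lambda p: score(*p))
    # Fine pass: dense search in a window around the coarse winner.
    window = [(x, y)
              for x in range(max(0, best[0] - coarse_step),
                             min(width, best[0] + coarse_step + 1))
              for y in range(max(0, best[1] - coarse_step),
                             min(height, best[1] + coarse_step + 1))]
    return max(window, key=lambda p: score(*p))

# A synthetic score peaked at (13, 21) inside a 64x64 region:
peak = coarse_to_fine_search(lambda x, y: -((x - 13) ** 2 + (y - 21) ** 2),
                             64, 64)
```

The coarse pass keeps the number of likelihood evaluations low, which is why this style of search is practical even with relatively expensive per-pixel features.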
|
|
|
Ivo Everts, Jan van Gemert, & Theo Gevers. (2012). Per-patch Descriptor Selection using Surface and Scene Properties. In 12th European Conference on Computer Vision (Vol. 7577, pp. 172–186). LNCS. Springer Berlin Heidelberg.
Abstract: Local image descriptors are generally designed for describing all possible image patches. Such patches may be subject to complex variations in appearance due to incidental object, scene and recording conditions. Because of this, a single best descriptor for accurate image representation under all conditions does not exist. Therefore, we propose to automatically select from a pool of descriptors the one best suited based on object surface and scene properties. These properties are measured on the fly from a single image patch through a set of attributes. Attributes are input to a classifier which selects the best descriptor. Our experiments on a large dataset of colored object patches show that the proposed selection method outperforms the best single descriptor and a priori combinations of the descriptor pool.
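The selection step (patch attributes in, best descriptor out) can be sketched with a nearest-centroid stand-in for the learned classifier; the descriptor names, attributes, and centroids below are hypothetical:

```python
def nearest_centroid(attrs, centroids):
    """Pick the descriptor whose training centroid of patch attributes is
    closest to this patch's attribute vector (a simple stand-in for the
    classifier described in the paper)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda name: d2(attrs, centroids[name]))

# Hypothetical per-descriptor attribute centroids, learned offline.
# Attributes could be, e.g., (texturedness, colorfulness) of a patch:
centroids = {"SIFT": (0.8, 0.2), "color-SIFT": (0.3, 0.9)}

best = nearest_centroid((0.75, 0.25), centroids)  # → "SIFT"
```

A highly textured, weakly colored patch is routed to the intensity-based descriptor, while a colorful patch would be routed to the color variant; the point is that the choice is made per patch, on the fly.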
|
|
|
Hamdi Dibeklioglu, Theo Gevers, & Albert Ali Salah. (2012). Are You Really Smiling at Me? Spontaneous versus Posed Enjoyment Smiles. In 12th European Conference on Computer Vision (Vol. 7574, pp. 525–538). LNCS. Springer Berlin Heidelberg.
Abstract: Smiling is an indispensable element of nonverbal social interaction. Moreover, automatic distinction between spontaneous and posed expressions is important for visual analysis of social signals. Therefore, in this paper, we propose a method to distinguish between spontaneous and posed enjoyment smiles by using the dynamics of eyelid, cheek, and lip corner movements. The discriminative power of these movements, and the effect of different fusion levels, are investigated on multiple databases. Our results improve the state of the art. We also introduce the largest spontaneous/posed enjoyment smile database collected to date, and report new empirical and conceptual findings on smile dynamics. The collected database consists of 1240 samples of 400 subjects. Moreover, it has the unique property of having an age range from 8 to 76 years. Large scale experiments on the new database indicate that eyelid dynamics are highly relevant for smile classification, and there are age-related differences in smile dynamics.
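Dynamics features of the kind used for such classification can be sketched from a landmark-distance signal (e.g. eyelid aperture over time); the feature set below is an illustrative subset, not the paper's actual descriptors:

```python
def movement_features(signal, fps):
    """Simple dynamics features of a landmark-distance signal sampled at
    fps frames per second: amplitude of the rise to the peak, duration
    of the rise phase, and mean rise speed (illustrative subset)."""
    peak = max(signal)
    peak_t = signal.index(peak)          # frame index of the apex
    amplitude = peak - signal[0]         # onset-to-apex displacement
    duration = peak_t / fps              # rise time in seconds
    speed = amplitude / duration if duration > 0 else 0.0
    return {"amplitude": amplitude, "duration": duration, "speed": speed}

# A smile onset tracked over six frames at 25 fps:
f = movement_features([0.0, 0.2, 0.5, 0.9, 1.0, 0.8], fps=25)
```

Features like these capture the temporal signature of a movement; the intuition behind such work is that spontaneous and posed smiles differ precisely in these speeds and durations rather than in the apex appearance alone.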
|
|
|
Theo Gevers, Arjan Gijsenij, Joost Van de Weijer, & J.M. Geusebroek. (2012). Color in Computer Vision: Fundamentals and Applications. The Wiley-IS&T Series in Imaging Science and Technology.
|
|
|
Daniel Ponsa, & Jordi Vitria. (1999). Mobile monitoring system using an agent-oriented approach.
|
|
|
A. Pujol, Jordi Vitria, Felipe Lumbreras, & Juan J. Villanueva. (2001). Topological principal component analysis for face encoding and recognition. PRL - Pattern Recognition Letters, 22(6-7), 769–776.
|
|
|
A. Dupuy, Joan Serrat, Jordi Vitria, & J. Pladellorens. (1991). Analysis of gammagraphic images by mathematical morphology. In Pattern Recognition and Image Analysis: IV Spanish Symposium of Pattern Recognition and Image Analysis, World Scientific Pub.
|
|
|
Joan Serrat, Jordi Vitria, & J. Pladellorens. (1991). Morphological Segmentation of Heart Scintigraphic Image Sequences. In Computer Assisted Radiology.
|
|
|
Antonio Lopez, Felipe Lumbreras, A. Martinez, Joan Serrat, Xavier Roca, X. Varona, et al. (1997). Aplicaciones de la visión por computador a la industria [Applications of computer vision to industry].
|
|
|
A.F. Sole, Antonio Lopez, Cristina Cañero, Petia Radeva, & J. Saludes. (1999). Crease enhancement diffusion.
|
|
|
Petia Radeva, A.F. Sole, Antonio Lopez, & Joan Serrat. (1998). Detecting Nets of Linear Structures in Satellite Images.
|
|
|
Petia Radeva, A.F. Sole, Antonio Lopez, & Joan Serrat. (1999). Detecting Nets of Linear Structures in Satellite Images.
|
|
|
Petia Radeva, & Joan Serrat. (1993). Rubber Snake: Implementation on Signed Distance Potential. In Vision Conference (pp. 187–194).
|
|