Maria Vanrell, Jordi Vitria, & Xavier Roca. (1997). A multidimensional scaling approach to explore the behavior of a texture perception algorithm. Machine Vision and Applications, 9, 262–271.
Bogdan Raducanu, & Fadi Dornaika. (2013). Texture-independent recognition of facial expressions in image snapshots and videos. Machine Vision and Applications, 24(4), 811–820.
Abstract: This paper addresses the static and dynamic recognition of basic facial expressions. It has two main contributions. First, we introduce a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker. We represent the learned facial actions associated with different facial expressions by time series. Second, we compare this dynamic scheme with a static one based on analyzing individual snapshots and show that the former performs better than the latter. We provide evaluations of performance using three subspace learning techniques: linear discriminant analysis, non-parametric discriminant analysis and support vector machines.
Juan Ramon Terven Salinas, Joaquin Salas, & Bogdan Raducanu. (2013). Estado del Arte en Sistemas de Vision Artificial para Personas Invidentes [State of the Art in Artificial Vision Systems for Blind People]. Komputer Sapiens, 20–25.
F. Moreso, D. Seron, Jordi Vitria, J.M. Grinyo, F.M. Colome-Serra, N. Pares, et al. (1994). Quantification of Interstitial Chronic Renal Damage by means of Texture Analysis. Kidney International, 46(6), 1721–1727.
Agata Lapedriza, David Masip, & Jordi Vitria. (2006). On the Use of External Face Features for Identity Verification. Journal of Multimedia, 1(4), 11–20.
Abstract: In general, automatic face classification applications capture images in natural environments. In these cases, performance is affected by variations in facial images related to illumination, pose, occlusion, or expression. Most existing face classification systems use only the internal features, composed of the eyes, nose, and mouth, since they are more difficult to imitate. Nevertheless, many applications not related to security are now being developed, and in these cases the information located in the head, chin, or ear zones (external features) can be useful for improving current accuracies. However, the lack of a natural alignment in these areas makes it difficult to extract these features with classic Bottom-Up methods. In this paper, we propose a complete scheme based on a Top-Down reconstruction algorithm to extract external features from face images. To test our system, we performed face verification experiments using public databases, given that identity verification is a general task with many real-life applications. We considered uniformly illuminated images, images with occlusions, and images with strong local changes in illumination; the obtained results show that the information contributed by the external features can be useful for verification purposes, and is especially significant when faces are partially occluded.
Keywords: Face Verification, Computer Vision, Machine Learning