|
B. Zhou, Agata Lapedriza, J. Xiao, A. Torralba, & A. Oliva. (2014). Learning Deep Features for Scene Recognition using Places Database. In 28th Annual Conference on Neural Information Processing Systems (pp. 487–495).
|
|
|
Agata Lapedriza, David Masip, & D. Sanchez. (2014). Emotions Classification using Facial Action Units Recognition. In 17th International Conference of the Catalan Association for Artificial Intelligence (Vol. 269, pp. 55–64).
Abstract: In this work we build a system for automatic emotion classification from image sequences. We analyze subtle changes in facial expressions by detecting a subset of 12 representative facial action units (AUs). We then classify emotions based on the output of these AU classifiers, i.e. the presence or absence of AUs. The AU classification relies on a set of spatio-temporal geometric and appearance features for facial representation, which are fused within the emotion classifier. A decision tree is trained for emotion classification, making the resulting model easy to interpret by capturing the combinations of AU activations that lead to a particular emotion. On the Cohn-Kanade database, the proposed system classifies 7 emotions with a mean accuracy of nearly 90%, attaining recognition accuracy similar to that of non-interpretable models that are not based on AU detection.
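The pipeline this abstract describes (binary AU detections fed into an interpretable decision tree) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the AU subset and the toy training rows are placeholders.

```python
# Sketch: classify emotions from binary Action Unit (AU) activations
# using an interpretable decision tree. The AU subset and the toy
# training examples below are illustrative, not the paper's data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# A hypothetical subset of 12 representative AUs
AUS = [f"AU{i}" for i in (1, 2, 4, 5, 6, 7, 9, 12, 15, 17, 23, 25)]

# Each row: presence/absence of the 12 AUs for one image sequence
X = np.array([
    [1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1],  # surprise-like pattern
    [0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1],  # happiness-like pattern
    [0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0],  # sadness-like pattern
])
y = ["surprise", "happiness", "sadness"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# The fitted tree reads directly as AU-combination rules,
# which is what makes this kind of model easy to interpret.
print(export_text(clf, feature_names=AUS))
```

The interpretability claim comes from the tree structure itself: each root-to-leaf path is a human-readable conjunction of AU activations.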
|
|
|
Fadi Dornaika, Bogdan Raducanu, & Alireza Bosaghzadeh. (2015). Facial expression recognition based on multi observations with application to social robotics. In Bruce Flores (Ed.), Emotional and Facial Expressions: Recognition, Developmental Differences and Social Importance (pp. 153–166). Nova Science Publishers.
Abstract: Human-robot interaction is a hot topic nowadays in the social robotics community. One crucial aspect is affective communication, which is encoded through facial expressions. In this chapter, we propose a novel approach for facial expression recognition that exploits an efficient and adaptive graph-based label propagation (in semi-supervised mode) within a multi-observation framework. The facial features are extracted using an appearance-based 3D face tracker that is view- and texture-independent. Our method has been extensively tested on the CMU dataset and compared with other methods for graph construction. With the proposed approach, we developed an application for an AIBO robot, in which it mirrors the recognized facial expression.
|
|
|
David Sanchez-Mendoza, David Masip, & Agata Lapedriza. (2015). Emotion recognition from mid-level features. Pattern Recognition Letters, 67(Part 1), 66–74.
Abstract: In this paper we present a study on the use of Action Units as mid-level features for automatically recognizing basic and subtle emotions. We propose a representation model based on mid-level facial muscular movement features. We encode these movements dynamically using the Facial Action Coding System, and propose to use these intermediate features based on Action Units (AUs) to classify emotions. AU activations are detected by fusing a set of spatio-temporal geometric and appearance features. The algorithm is validated in two applications: (i) the recognition of 7 basic emotions using the publicly available Cohn-Kanade database, and (ii) the inference of subtle emotional cues in the Newscast database. In this second scenario, we consider emotions that are perceived cumulatively over longer periods of time. In particular, we automatically classify whether video shots from public news TV channels refer to good or bad news. To deal with the different video lengths, we propose a Histogram of Action Units and compute it using a sliding-window strategy on the frame sequences. Our approach achieves accuracies close to human perception.
Keywords: Facial expression; Emotion recognition; Action units; Computer vision
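The sliding-window Histogram of Action Units idea from this abstract can be sketched as below. The window and stride sizes and the per-frame AU detections are illustrative assumptions, not the paper's actual settings.

```python
# Sketch: Histogram of Action Units (HoAU) over a sliding window,
# mapping variable-length videos to fixed-size descriptors.
# Window/stride values and the per-frame AU matrix are assumptions.
import numpy as np

def hoau(au_activations: np.ndarray, window: int = 30, stride: int = 15) -> np.ndarray:
    """au_activations: (n_frames, n_aus) binary matrix of per-frame
    AU detections. Returns one normalized AU histogram per window,
    stacked into an (n_windows, n_aus) array."""
    n_frames, n_aus = au_activations.shape
    hists = []
    for start in range(0, max(n_frames - window + 1, 1), stride):
        chunk = au_activations[start:start + window]
        # Fraction of frames in the window where each AU is active
        hists.append(chunk.mean(axis=0))
    return np.vstack(hists)

# Toy example: a 90-frame sequence with 12 AUs
rng = np.random.default_rng(0)
seq = (rng.random((90, 12)) > 0.7).astype(int)
H = hoau(seq)  # one normalized histogram per window
```

Because each histogram has a fixed dimensionality regardless of clip length, any standard classifier can then be applied per window (or to an aggregate over windows).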
|
|
|
Petia Radeva, J. Martinez, A. Tovar, X. Binefa, Jordi Vitria, & Juan J. Villanueva. (1999). CORKIDENT: an automatic vision system for real-time inspection of natural products.
|
|
|
J.M. Sanchez, X. Binefa, Jordi Vitria, & Petia Radeva. (1999). Local Analysis for Scene Break Detection Applied to TV Commercials Recognition.
|
|
|
Jordi Vitria, Petia Radeva, & X. Binefa. (1999). EigenHistograms: using low dimensional models of color distribution for real time object recognition.
|
|
|
Petia Radeva, & Jordi Vitria. (2001). Region-Based Approach for Discriminant Snakes.
|
|
|
F. de la Torre, Jordi Vitria, Petia Radeva, & J. Melenchon. (2000). EigenFiltering for Flexible Eigentracking.
|
|
|
Petia Radeva, M. Bressan, A. Tovar, & Jordi Vitria. (2002). Bayesian Classification for Inspection of Industrial Products.
|
|
|
Petia Radeva, M. Bressan, A. Tovar, & Jordi Vitria. (2002). Real-Time Inspection of Cork Stoppers Using Parametric Methods in High Dimensional Spaces.
|
|
|
X. Binefa, J.M. Sanchez, Petia Radeva, & Jordi Vitria. (2000). Linking Visual Cues and Semantic Terms Under Specific Digital Video Domains.
|
|
|
Petia Radeva, & Jordi Vitria. (2003). Inteligencia artificial [Artificial Intelligence]. Centre de Visio per Computador.
|
|