Laura Igual, Agata Lapedriza, & Ricard Borras. (2013). Robust Gait-Based Gender Classification using Depth Cameras. EURASIP Journal on Advances in Signal Processing, 37(1), 72–80.
Abstract: This article presents a new approach for gait-based gender recognition using depth cameras that can run in real time. The main contribution of this study is a new fast feature extraction strategy that uses the 3D point cloud obtained from the frames in a gait cycle. For each frame, these points are aligned according to their centroid and grouped. After that, they are projected onto their PCA plane, obtaining a representation of the cycle that is particularly robust against view changes. Final discriminative features are then computed by first building a histogram of the projected points and then applying linear discriminant analysis. To test the method we used the DGait database, currently the only publicly available database for gait analysis that includes depth information. We performed experiments on manually labeled cycles and on whole video sequences, and the results show that our method significantly improves accuracy compared with state-of-the-art systems that do not use depth information. Furthermore, our approach is insensitive to illumination changes, since it discards the RGB information. This makes the method especially suitable for real applications, as illustrated in the last part of the experiments section.
Matthias S. Keil, & Jordi Vitria. (2007). Pushing it to the Limit: Adaptation with Dynamically Switching Gain Control. EURASIP Journal on Advances in Signal Processing, 2007, Article ID 51684, 10 pages. doi:10.1155/2007/51684.
David Masip, M. Bressan, & Jordi Vitria. (2005). Feature extraction methods for real-time face detection and classification. EURASIP Journal on Applied Signal Processing, 13, 2061–2071.
A. Martinez, & Jordi Vitria. (2001). Clustering in Image Space for Place Recognition and Visual Annotations for Human-Robot Interaction. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 31(5), 669–682 (IF: 0.789).
Maria Elena Meza-de-Luna, Juan Ramon Terven Salinas, Bogdan Raducanu, & Joaquin Salas. (2016). Assessing the Influence of Mirroring on the Perception of Professional Competence using Wearable Technology. IEEE Transactions on Affective Computing, 9(2), 161–175.
Abstract: Nonverbal communication is an intrinsic part of daily face-to-face meetings. A frequently observed behavior during social interactions is mirroring, in which one person tends to mimic the attitude of the counterpart. This paper shows that a computer vision system can be used to predict the perception of competence in dyadic interactions through the automatic detection of mirroring events. To test our hypothesis, we developed: (1) a social assistant for mirroring detection, using a wearable device that includes a video camera, and (2) an automatic classifier for the perception of competence, using the number of nodding gestures and mirroring events as predictors. For our study, we used a mixed-method approach in an experimental design in which 48 participants acting as customers interacted with a confederate psychologist. We found that the number of nods or mirroring events has a significant influence on the perception of competence. Our results suggest that: (1) customer mirroring is a better predictor than psychologist mirroring; (2) the number of the psychologist's nods is a better predictor than the number of the customer's nods; and (3) except for psychologist mirroring, the computer vision algorithm we used worked about equally well whether it acquired images from wearable smartglasses or from fixed cameras.
Keywords: Mirroring; Nodding; Competence; Perception; Wearable Technology