R. Clariso, David Masip, & A. Rius. (2014). Student projects empowering mobile learning in higher education. RUSC - Revista de Universidad y Sociedad del Conocimiento, 192–207.
Sergio Escalera, Xavier Baro, Jordi Vitria, Petia Radeva, & Bogdan Raducanu. (2012). Social Network Extraction and Analysis Based on Multimodal Dyadic Interaction. SENS - Sensors, 12(2), 1702–1719 (IF: 1.77, 2010).
Abstract: Social interactions are a very important component in people's lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog.
The social network is represented as an oriented graph whose directed links are determined by the Influence Model. The links' weights are a measure of the "influence" one person has over the other. The states of the Influence Model encode audio/visual features automatically extracted from the videos using state-of-the-art algorithms. Our results are reported in terms of the accuracy of audio/visual data fusion for speaker segmentation and the centrality measures used to characterize the extracted social network.
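The centrality analysis mentioned in the abstract can be sketched in a few lines. The snippet below builds a weighted directed "influence" graph and computes two centrality measures with networkx; the participants and edge weights are hypothetical, and the library choice is an assumption, not part of the paper.

```python
# Sketch: characterizing a directed "influence" network with centrality measures.
# Edge weights here are made up; in the paper they come from the Influence Model
# applied to automatically extracted audio/visual features.
import networkx as nx

# A directed edge (a, b, w) reads "a influences b with strength w".
influence_edges = [
    ("speaker_A", "speaker_B", 0.7),
    ("speaker_B", "speaker_A", 0.3),
    ("speaker_A", "speaker_C", 0.5),
    ("speaker_C", "speaker_B", 0.6),
]

G = nx.DiGraph()
G.add_weighted_edges_from(influence_edges)

# Weighted in-degree: total influence each participant receives.
received = dict(G.in_degree(weight="weight"))

# Eigenvector centrality on the weighted digraph: being influenced by
# influential participants increases a node's score.
eig = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)

for node in G.nodes:
    print(f"{node}: received={received[node]:.2f}, eigenvector={eig[node]:.3f}")
```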
J.M. Sanchez, X. Binefa, & Jordi Vitria. (2002). Shot Partitioning Based Recognition of TV Commercials. Multimedia Tools and Applications, 18, 233–247, Kluwer Academic Publishers (IF: 0.421).
David Masip, & Jordi Vitria. (2008). Shared Feature Extraction for Nearest Neighbor Face Recognition. IEEE Transactions on Neural Networks, 586–595.
Laura Igual, Agata Lapedriza, & Ricard Borras. (2013). Robust Gait-Based Gender Classification using Depth Cameras. EURASIPJ - EURASIP Journal on Advances in Signal Processing, 37(1), 72–80.
Abstract: This article presents a new approach for gait-based gender recognition using depth cameras that can run in real time. The main contribution of this study is a new fast feature extraction strategy that uses the 3D point cloud obtained from the frames in a gait cycle. For each frame, these points are aligned according to their centroid and grouped. After that, they are projected onto their PCA plane, obtaining a representation of the cycle that is particularly robust against view changes. The final discriminative features are then computed by first building a histogram of the projected points and then applying linear discriminant analysis. To test the method we used the DGait database, which is currently the only publicly available database for gait analysis that includes depth information. We performed experiments on manually labeled cycles and over whole video sequences, and the results show that our method significantly improves accuracy compared with state-of-the-art systems that do not use depth information. Furthermore, our approach is insensitive to illumination changes, since it discards the RGB information. This makes the method especially suitable for real applications, as illustrated in the last part of the experiments section.
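A rough, non-authoritative sketch of the kind of pipeline the abstract describes (centroid alignment, PCA-plane projection, histogram, LDA) is shown below; the array shapes, bin counts, and use of scikit-learn are assumptions and do not reproduce the paper's actual implementation or parameters.

```python
# Minimal sketch of a PCA-plane + histogram + LDA gait descriptor,
# loosely following the pipeline described in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cycle_descriptor(frames, bins=16):
    """frames: list of (N_i, 3) arrays, one 3D point cloud per frame of a gait cycle."""
    # Align each frame's cloud to its centroid, then pool all points of the cycle.
    pooled = np.vstack([pts - pts.mean(axis=0) for pts in frames])
    # Project the pooled cloud onto its two principal axes (the "PCA plane").
    proj = PCA(n_components=2).fit_transform(pooled)
    # Summarize the projected cloud as a normalized 2D histogram.
    hist, _, _ = np.histogram2d(proj[:, 0], proj[:, 1], bins=bins)
    return (hist / hist.sum()).ravel()

# Toy usage: random clouds stand in for real depth-camera data, synthetic labels.
rng = np.random.default_rng(0)
X = np.array([cycle_descriptor([rng.normal(size=(200, 3)) for _ in range(20)])
              for _ in range(40)])
y = rng.integers(0, 2, size=40)  # 0 = female, 1 = male (synthetic)

# LDA provides the final discriminative projection / classifier.
clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```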