|
Agata Lapedriza, Santiago Segui, David Masip, & Jordi Vitria. (2008). A Sparse Bayesian Approach for Joint Feature Selection and Classifier Learning. Pattern Analysis and Applications, Special Issue: Non-Parametric Distance-Based Classification Techniques and Their Applications, 299–308.
|
|
|
Bogdan Raducanu, & Jordi Vitria. (2008). Online Nonparametric Discriminant Analysis for Incremental Subspace Learning and Recognition. Pattern Analysis and Applications, Special Issue: Non-Parametric Distance-Based Classification Techniques and Their Applications, 259–268.
|
|
|
F. Pla, Petia Radeva, & Jordi Vitria. (2008). Non-parametric distance-based classification techniques and their applications. Pattern Analysis and Applications, Special Issue: Non-Parametric Distance-Based Classification Techniques and Their Applications, 223–225.
|
|
|
R. Clariso, David Masip, & A. Rius. (2014). Student projects empowering mobile learning in higher education. RUSC - Revista de Universidad y Sociedad del Conocimiento, 192–207.
|
|
|
Sergio Escalera, Xavier Baro, Jordi Vitria, Petia Radeva, & Bogdan Raducanu. (2012). Social Network Extraction and Analysis Based on Multimodal Dyadic Interaction. SENS - Sensors, 12(2), 1702–1719.
IF=1.77 (2010)
Abstract: Social interactions are a very important component in people's lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog.
The social network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links' weights are a measure of the "influence" one person has over the other. The states of the Influence Model encode automatically extracted audio/visual features from our videos using state-of-the-art algorithms. Our results are reported in terms of accuracy of audio/visual data fusion for speaker segmentation and centrality measures used to characterize the extracted social network.
|
|