PT Unknown
AU Sergio Escalera
AU Petia Radeva
AU Jordi Vitria
AU Xavier Baro
AU Bogdan Raducanu
TI Modelling and Analyzing Multimodal Dyadic Interactions Using Social Networks
BT 12th International Conference on Multimodal Interfaces and 7th Workshop on Machine Learning for Multimodal Interaction
PY 2010
DI 10.1145/1891903.1891967
DE Social interaction; Multimodal fusion; Influence model; Social network analysis
AB Social network analysis has become a common technique for modelling and quantifying the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. First, speech detection is performed through an audio/visual fusion scheme based on stacked sequential learning. In the audio domain, speech is detected through clustering of audio features. Clusters are modelled by means of a one-state Hidden Markov Model containing a diagonal-covariance Gaussian Mixture Model. In the visual domain, speech detection is performed through differential-based feature extraction from the segmented mouth region, followed by a dynamic programming matching procedure. Second, in order to model the dyadic interactions, we employ the Influence Model, whose states encode the previously integrated audio/visual data. Third, the social network is extracted based on the estimated influences. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog. The results are reported both in terms of the accuracy of the audio/visual data fusion and of the centrality measures used to characterize the social network.
ER