PT Unknown
AU Fatemeh Noroozi; Marina Marjanovic; Angelina Njegus; Sergio Escalera; Gholamreza Anbarjafari
TI Fusion of Classifier Predictions for Audio-Visual Emotion Recognition
BT 23rd International Conference on Pattern Recognition Workshops
PY 2016
AB In this paper, we present a novel multimodal emotion recognition system based on the analysis of audio and visual cues. MFCC-based features are extracted from the audio channel, and facial landmark geometric relations are computed from the visual data. Both sets of features are learnt separately using state-of-the-art classifiers. In addition, we summarise each emotion video into a reduced set of key-frames, which are learnt in order to visually discriminate emotions by means of a Convolutional Neural Network. Finally, the confidence outputs of all classifiers from all modalities are used to define a new feature space that is learnt for final emotion prediction, in a late fusion/stacking fashion. Experiments conducted on the eNTERFACE’05 database show significant performance improvements of the proposed system in comparison to state-of-the-art approaches.
ER
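
The late fusion/stacking step described in the abstract can be sketched as follows: base classifiers are trained per modality, their class-confidence outputs are concatenated into a new feature space, and a meta-classifier is learnt on top for the final emotion prediction. The sketch below is a minimal illustration only, assuming precomputed audio (X_audio) and visual (X_visual) feature matrices with labels y; the classifier choices (SVC, RandomForestClassifier, LogisticRegression) and the scikit-learn-based implementation are assumptions for illustration, not the models or code used in the paper.

# Minimal sketch of late fusion/stacking of per-modality classifier confidences.
# Assumes precomputed audio (MFCC-based) and visual (landmark-geometry) feature
# matrices X_audio, X_visual and emotion labels y; classifier choices are
# illustrative, not those of the original paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def stack_predictions(X_audio, X_visual, y):
    # Base classifiers, one per modality.
    audio_clf = SVC(probability=True)
    visual_clf = RandomForestClassifier(n_estimators=200)

    # Out-of-fold confidence outputs avoid leaking training labels
    # into the meta-level feature space.
    p_audio = cross_val_predict(audio_clf, X_audio, y, cv=5, method="predict_proba")
    p_visual = cross_val_predict(visual_clf, X_visual, y, cv=5, method="predict_proba")

    # Concatenated class confidences form the new feature space
    # learnt by the meta-classifier for the final emotion prediction.
    X_meta = np.hstack([p_audio, p_visual])
    meta_clf = LogisticRegression(max_iter=1000).fit(X_meta, y)

    # Refit the base classifiers on all data for use at test time.
    audio_clf.fit(X_audio, y)
    visual_clf.fit(X_visual, y)
    return audio_clf, visual_clf, meta_clf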