TY - JOUR
AU - Noroozi, Fatemeh
AU - Marjanovic, Marina
AU - Njegus, Angelina
AU - Escalera, Sergio
AU - Anbarjafari, Gholamreza
PY - 2019
TI - Audio-Visual Emotion Recognition in Video Clips
T2 - TAC
JO - IEEE Transactions on Affective Computing
SP - 60
EP - 75
VL - 10
IS - 1
N2 - This paper presents a multimodal emotion recognition system, which is based on the analysis of audio and visual cues. From the audio channel, Mel-Frequency Cepstral Coefficients, Filter Bank Energies and prosodic features are extracted. For the visual part, two strategies are considered. First, facial landmarks’ geometric relations, i.e. distances and angles, are computed. Second, we summarize each emotional video into a reduced set of key-frames, which are taught to visually discriminate between the emotions. In order to do so, a convolutional neural network is applied to key-frames summarizing videos. Finally, confidence outputs of all the classifiers from all the modalities are used to define a new feature space to be learned for final emotion label prediction, in a late fusion/stacking fashion. The experiments conducted on the SAVEE, eNTERFACE’05, and RML databases show significant performance improvements by our proposed system in comparison to current alternatives, defining the current state-of-the-art in all three databases.
UR - http://dx.doi.org/10.1109/TAFFC.2017.2713783
N1 - HUPBA; 602.143; 602.133
ID - Fatemeh Noroozi2019
ER -