%0 Conference Proceedings
%T Seeing and Hearing Egocentric Actions: How Much Can We Learn?
%A Alejandro Cartas
%A Jordi Luque
%A Petia Radeva
%A Carlos Segura
%A Mariella Dimiccoli
%B IEEE International Conference on Computer Vision Workshops
%D 2019
%F Alejandro Cartas2019
%X Our interaction with the world is an inherently multimodal experience. However, the understanding of human-to-object interactions has historically been addressed by focusing on a single modality, and only a limited number of works have considered integrating the visual and audio modalities for this purpose. In this work, we propose a multimodal approach for egocentric action recognition in a kitchen environment that relies on audio and visual information. Our model combines a sparse temporal sampling strategy with a late fusion of audio, spatial, and temporal streams. Experimental results on the EPIC-Kitchens dataset show that multimodal integration leads to better performance than unimodal approaches. In particular, we achieved a 5.18% improvement over the state of the art on verb classification.
%U https://ieeexplore.ieee.org/document/9022020
%U http://dx.doi.org/10.1109/ICCVW.2019.00548
%P 4470-4480