%0 Conference Proceedings
%T Combining Models from Multiple Sources for RGB-D Scene Recognition
%A Xinhang Song
%A Shuqiang Jiang
%A Luis Herranz
%B 26th International Joint Conference on Artificial Intelligence
%D 2017
%F Xinhang Song2017
%O LAMP; 600.120
%X Depth can complement RGB with useful cues about object volumes and scene layout. However, RGB-D image datasets are still too small for directly training deep convolutional neural networks (CNNs), in contrast to the massive monomodal RGB datasets. Previous works in RGB-D recognition typically combine two separate networks for RGB and depth data, pretrained on a large RGB dataset and then fine-tuned to the respective target RGB and depth datasets. These approaches have several limitations: 1) they use only low-level filters learned from RGB data and thus cannot properly exploit depth-specific patterns, and 2) RGB and depth features are combined only at high levels and rarely at lower levels. In this paper, we propose a framework that leverages both knowledge acquired from large RGB datasets and depth-specific cues learned from the limited depth data, obtaining more effective multi-source and multi-modal representations. We propose a multi-modal combination method that selects discriminative combinations of layers from the different source models and target modalities, capturing both high-level properties of the task and intrinsic low-level properties of both modalities.
%K Robotics and Vision
%K Vision and Perception
%U http://refbase.cvc.uab.es/files/SJH2017b.pdf
%U http://dx.doi.org/10.24963/ijcai.2017/631
%P 4523-4529