TY  - CONF
AU  - Song, Xinhang
AU  - Jiang, Shuqiang
AU  - Herranz, Luis
A2  - IJCAI
PY  - 2017//
TI  - Combining Models from Multiple Sources for RGB-D Scene Recognition
BT  - 26th International Joint Conference on Artificial Intelligence
SP  - 4523
EP  - 4529
KW  - Robotics and Vision
KW  - Vision and Perception
N2  - Depth can complement RGB with useful cues about object volumes and scene layout. However, RGB-D image datasets are still too small for directly training deep convolutional neural networks (CNNs), in contrast to the massive monomodal RGB datasets. Previous works in RGB-D recognition typically combine two separate networks for RGB and depth data, pretrained with a large RGB dataset and then fine-tuned to the respective target RGB and depth datasets. These approaches have several limitations: 1) they use only low-level filters learned from RGB data, and thus cannot properly exploit depth-specific patterns, and 2) RGB and depth features are combined only at high levels, rarely at lower ones. In this paper, we propose a framework that combines knowledge acquired from large RGB datasets with depth-specific cues learned from the limited depth data, obtaining more effective multi-source and multi-modal representations. We propose a multi-modal combination method that selects discriminative combinations of layers from the different source models and target modalities, capturing both high-level properties of the task and intrinsic low-level properties of both modalities.
L1  - http://refbase.cvc.uab.es/files/SJH2017b.pdf
UR  - http://dx.doi.org/10.24963/ijcai.2017/631
N1  - LAMP; 600.120
ID  - Xinhang Song2017
ER  -