%0 Conference Proceedings
%T TEX-Nets: Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition
%A Muhammad Anwer Rao
%A Fahad Shahbaz Khan
%A Joost Van de Weijer
%A Jorma Laaksonen
%B 19th International Conference on Multimodal Interaction
%D 2017
%F Muhammad Anwer Rao2017
%O LAMP; 600.109; 600.068; 600.120
%O exported from refbase (http://refbase.cvc.uab.es/show.php?record=3038), last updated on Fri, 04 Feb 2022 13:19:29 +0100
%X Recognizing materials and textures in realistic imaging conditions is a challenging computer vision problem. For many years, local feature-based orderless representations were the dominant approach to texture recognition. Recently, deep local features extracted from the intermediate layers of a Convolutional Neural Network (CNN) have been used as filter banks. These dense local descriptors from a deep model, when encoded with Fisher Vectors, have been shown to provide excellent results for texture recognition. The CNN models employed in such approaches take RGB patches as input and are trained on a large number of labeled images. We show that CNN models, which we call TEX-Nets, trained on mapped coded images with explicit texture information provide complementary information to standard deep models trained on RGB patches. We further investigate two deep architectures, namely early and late fusion, to combine the texture and color information. Experiments on benchmark texture datasets clearly demonstrate that TEX-Nets provide information complementary to the standard RGB deep network. Our approach yields large gains of 4.8%, 3.5%, 2.6% and 4.1% in accuracy on the DTD, KTH-TIPS-2a, KTH-TIPS-2b and Texture-10 datasets, respectively, compared to the standard RGB network of the same architecture. Further, our final combination leads to consistent improvements over the state-of-the-art on all four datasets.
%K Convolutional Neural Networks
%K Texture Recognition
%K Local Binary Patterns
%U http://refbase.cvc.uab.es/files/RKW2017.pdf
%U http://dx.doi.org/10.1145/3078971.3079001