%0 Conference Proceedings
%T Self-supervised learning of visual features through embedding images into text topic spaces
%A Lluis Gomez
%A Y. Patel
%A Marçal Rusiñol
%A C.V. Jawahar
%A Dimosthenis Karatzas
%B 30th IEEE Conference on Computer Vision and Pattern Recognition
%D 2017
%F Lluis Gomez2017
%O DAG; 600.084; 600.121
%X End-to-end training from scratch of current deep architectures for new computer vision problems would require ImageNet-scale datasets, which are not always available. In this paper we present a method that takes advantage of freely available multi-modal content to train computer vision algorithms without human supervision. We put forward the idea of performing self-supervised learning of visual features by mining a large-scale corpus of multi-modal (text and image) documents. We show that discriminative visual features can be learnt efficiently by training a CNN to predict the semantic context in which a particular image is most likely to appear as an illustration. To do so, we leverage the hidden semantic structures discovered in the text corpus with a well-known topic modeling technique. Our experiments demonstrate state-of-the-art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised or naturally-supervised approaches.
%U https://arxiv.org/abs/1705.08631
%U http://refbase.cvc.uab.es/files/GPR2017.pdf
%U http://dx.doi.org/10.1109/CVPR.2017.218
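
The abstract outlines the core recipe: run a topic model over the text corpus, then train a CNN to predict, from an image alone, the topic distribution of the article it illustrates. Below is a minimal sketch of that setup, assuming LDA (via gensim) as the "well-known topic modeling technique" and a toy PyTorch CNN; the corpus, topic count, network, and soft-target cross-entropy loss are illustrative placeholders, not the authors' exact pipeline.

import torch
import torch.nn as nn
import torch.nn.functional as F
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# 1) Discover hidden semantic structure in the text corpus with LDA.
docs = [["cat", "pet", "fur"], ["car", "engine", "road"], ["cat", "road"]]
dictionary = Dictionary(docs)
bows = [dictionary.doc2bow(d) for d in docs]
num_topics = 2  # tiny toy value; the paper explores much larger topic spaces
lda = LdaModel(corpus=bows, id2word=dictionary, num_topics=num_topics, random_state=0)

def topic_distribution(tokens):
    """Soft target: probability of each topic for one article's text."""
    bow = dictionary.doc2bow(tokens)
    dist = torch.zeros(num_topics)
    for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
        dist[topic_id] = prob
    return dist

# 2) Train a CNN so that its output matches the topic distribution of the
#    text each image illustrates (a hypothetical small network, not the
#    backbones used in the paper).
cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, num_topics),
)
optimizer = torch.optim.SGD(cnn.parameters(), lr=0.01)

images = torch.randn(len(docs), 3, 64, 64)          # stand-in illustrations
targets = torch.stack([topic_distribution(d) for d in docs])

for _ in range(10):
    logits = cnn(images)
    # Cross-entropy against the soft topic-probability targets: the network
    # learns visual features predictive of the text's semantic context.
    loss = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

After training on a real corpus, the convolutional layers can be reused as self-supervised visual features for classification, detection, or image-text retrieval, as evaluated in the paper.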