TY - CHAP
AU - Raul Gomez
AU - Lluis Gomez
AU - Jaume Gibert
AU - Dimosthenis Karatzas
PY - 2019//
TI - Self-Supervised Learning from Web Data for Multimodal Retrieval
BT - Multi-Modal Scene Understanding
SP - 279
EP - 306
KW - self-supervised learning
KW - webly supervised learning
KW - text embeddings
KW - multimodal retrieval
KW - multimodal embedding
N2 - Self-supervised learning from multimodal image and text data allows deep neural networks to learn powerful features with no need for human-annotated data. Web and social media platforms provide a virtually unlimited amount of this multimodal data. In this work we propose to exploit this freely available data to learn a multimodal image and text embedding, aiming to leverage the semantic knowledge learnt in the text domain and transfer it to a visual model for semantic image retrieval. We demonstrate that the proposed pipeline can learn from images with associated text without supervision, and we analyze the semantic structure of the learnt joint image and text embedding space. We perform a thorough analysis and performance comparison of five different state-of-the-art text embeddings in three different benchmarks. We show that the embeddings learnt with Web and Social Media data are competitive with supervised methods in the text-based image retrieval task, and we clearly outperform the state of the art in the MIRFlickr dataset when training on the target data. Further, we demonstrate how semantic multimodal image retrieval can be performed using the learnt embeddings, going beyond classical instance-level retrieval problems. Finally, we present a new dataset, InstaCities1M, composed of Instagram images and their associated texts, that can be used for fair comparison of image-text embeddings.
UR - https://www.sciencedirect.com/science/article/pii/B9780128173589000159
L1 - http://refbase.cvc.uab.es/files/GGG2019.pdf
N1 - DAG; 600.129; 601.338; 601.310
ID - Raul Gomez2019
ER -