Hannes Mueller, Andre Groger, Jonathan Hersh, Andrea Matranga, & Joan Serrat. (2020). Monitoring War Destruction from Space: A Machine Learning Approach.
Abstract: Existing data on building destruction in conflict zones rely on eyewitness reports or manual detection, which makes them generally scarce, incomplete, and potentially biased. This lack of reliable data imposes severe limitations on media reporting, humanitarian relief efforts, human rights monitoring, reconstruction initiatives, and academic studies of violent conflict. This article introduces an automated method for measuring destruction in high-resolution satellite images using deep learning techniques combined with data augmentation to expand training samples. We apply this method to the Syrian civil war and reconstruct the evolution of damage in major cities across the country. The approach makes it possible to generate destruction data with unprecedented scope, resolution, and frequency, limited only by the available satellite imagery, which can decisively alleviate existing data limitations.
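A minimal sketch of the kind of augmentation pipeline the abstract alludes to, applied to satellite image tiles; the specific transforms and tile size are illustrative assumptions, not the authors' exact setup:

```python
# Hypothetical augmentation pipeline for expanding scarce training tiles;
# standard torchvision transforms, not the authors' published configuration.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),   # overhead tiles have no canonical "up"
    transforms.RandomRotation(degrees=90),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

tile = Image.new("RGB", (256, 256))  # stand-in for a satellite image tile
x = augment(tile)                    # (3, 256, 256) tensor; differs on every call
```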
Yi Xiao, Felipe Codevilla, Akhil Gurram, Onay Urfalioglu, & Antonio Lopez. (2020). Multimodal end-to-end autonomous driving. TITS - IEEE Transactions on Intelligent Transportation Systems, 1–11.
Abstract: A crucial component of an autonomous vehicle (AV) is the artificial intelligence (AI) that is able to drive towards a desired destination. Today, there are different paradigms addressing the development of AI drivers. On the one hand, we find modular pipelines, which divide the driving task into sub-tasks such as perception, maneuver planning, and control. On the other hand, we find end-to-end driving approaches that try to learn a direct mapping from raw input sensor data to vehicle control signals. The latter are relatively less studied but are gaining popularity since they are less demanding in terms of sensor data annotation. This paper focuses on end-to-end autonomous driving. So far, most proposals relying on this paradigm assume RGB images as input sensor data. However, AVs will not be equipped only with cameras, but also with active sensors providing accurate depth information (e.g., LiDARs). Accordingly, this paper analyses whether combining RGB and depth modalities, i.e. using RGBD data, produces better end-to-end AI drivers than relying on a single modality. We consider multimodality based on early, mid, and late fusion schemes, both in multisensory and single-sensor (monocular depth estimation) settings. Using the CARLA simulator and conditional imitation learning (CIL), we show that early fusion multimodality indeed outperforms single-modality approaches.
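A minimal sketch of the early versus late fusion schemes the paper compares, in PyTorch; layer sizes, names, and the three-dimensional control head are illustrative assumptions, not the authors' CIL architecture:

```python
# Minimal sketch of early vs. late RGBD fusion for a CIL-style driving net.
# Layer sizes and the (steer, throttle, brake) head are assumptions.
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Fuse at the input: stack RGB (3 ch) and depth (1 ch) into a 4-ch image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.control = nn.Linear(64, 3)

    def forward(self, rgb, depth):
        return self.control(self.encoder(torch.cat([rgb, depth], dim=1)))

class LateFusion(nn.Module):
    """Encode each modality separately and fuse the resulting feature vectors.
    (Mid fusion would instead concatenate intermediate feature maps.)"""
    def __init__(self):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.rgb_branch, self.depth_branch = branch(3), branch(1)
        self.control = nn.Linear(64, 3)

    def forward(self, rgb, depth):
        feats = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.control(feats)
```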
Andres Mafla, Sounak Dey, Ali Furkan Biten, Lluis Gomez, & Dimosthenis Karatzas. (2021). Multi-modal reasoning graph for scene-text based fine-grained image classification and retrieval. In IEEE Winter Conference on Applications of Computer Vision (pp. 4022–4032).
Andres Mafla, Rafael S. Rezende, Lluis Gomez, Diana Larlus, & Dimosthenis Karatzas. (2021). StacMR: Scene-Text Aware Cross-Modal Retrieval. In IEEE Winter Conference on Applications of Computer Vision (pp. 2219–2229).
Andres Mafla, Ruben Tito, Sounak Dey, Lluis Gomez, Marçal Rusiñol, Ernest Valveny, et al. (2021). Real-time Lexicon-free Scene Text Retrieval. PR - Pattern Recognition, 110, 107656.
Abstract: In this work, we address the task of scene text retrieval: given a text query, the system returns all images containing the queried text. The proposed model uses a single-shot CNN architecture that predicts bounding boxes and builds a compact representation of spotted words. In this way, the problem can be modeled as a nearest-neighbor search of the textual representation of a query over the CNN outputs collected from the totality of an image database. Our experiments demonstrate that the proposed model outperforms the previous state of the art while offering a significant increase in processing speed and unmatched expressiveness with samples never seen at training time. Several experiments to assess the generalization capability of the model are conducted on a multilingual dataset, as well as an application of real-time text spotting in videos.
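As the abstract notes, the retrieval step itself reduces to a nearest-neighbor search. A minimal sketch, assuming the query string has already been embedded into the same space as the per-word CNN descriptors (all names hypothetical):

```python
# Nearest-neighbor retrieval over precomputed word descriptors; the
# embedding and descriptor dimensions are assumptions, not the paper's.
import numpy as np

def retrieve(query_vec, word_vecs, image_ids, k=10):
    """Rank images by the best-matching word descriptor they contain.

    query_vec: (D,) embedding of the query string.
    word_vecs: (N, D) descriptors of all words spotted across the database.
    image_ids: length-N list mapping each descriptor to its source image.
    """
    q = query_vec / np.linalg.norm(query_vec)
    w = word_vecs / np.linalg.norm(word_vecs, axis=1, keepdims=True)
    sims = w @ q                               # cosine similarity per word
    best = {}
    for img, s in zip(image_ids, sims):        # keep each image's best match
        best[img] = max(best.get(img, -1.0), float(s))
    return sorted(best.items(), key=lambda t: -t[1])[:k]
```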
Lluis Gomez, Anguelos Nicolaou, Marçal Rusiñol, & Dimosthenis Karatzas. (2020). 12 years of ICDAR Robust Reading Competitions: The evolution of reading systems for unconstrained text understanding. In K. Alahari, & C.V. Jawahar (Eds.), Visual Text Interpretation – Algorithms and Applications in Scene Understanding and Document Analysis. Series on Advances in Computer Vision and Pattern Recognition. Springer.
Lluis Gomez, Dena Bazazian, & Dimosthenis Karatzas. (2020). Historical review of scene text detection research. In K. Alahari, & C.V. Jawahar (Eds.), Visual Text Interpretation – Algorithms and Applications in Scene Understanding and Document Analysis. Series on Advances in Computer Vision and Pattern Recognition. Springer.
Jon Almazan, Lluis Gomez, Suman Ghosh, Ernest Valveny, & Dimosthenis Karatzas. (2020). WATTS: A common representation of word images and strings using embedded attributes for text recognition and retrieval. In K. Alahari, & C.V. Jawahar (Eds.), Visual Text Interpretation – Algorithms and Applications in Scene Understanding and Document Analysis. Series on Advances in Computer Vision and Pattern Recognition. Springer.
Raul Gomez, Yahui Liu, Marco de Nadai, Dimosthenis Karatzas, Bruno Lepri, & Nicu Sebe. (2020). Retrieval Guided Unsupervised Multi-domain Image to Image Translation. In 28th ACM International Conference on Multimedia.
Abstract: Image-to-image translation aims to learn a mapping that transforms an image from one visual domain to another. Recent works assume that image descriptors can be disentangled into a domain-invariant content representation and a domain-specific style representation. Thus, translation models seek to preserve the content of source images while changing the style to a target visual domain. However, synthesizing new images is extremely challenging, especially in multi-domain translations, as the network has to compose content and style to generate reliable and diverse images in multiple domains. In this paper we propose the use of an image retrieval system to assist the image-to-image translation task. First, we train an image-to-image translation model to map images to multiple domains. Then, we train an image retrieval model using real and generated images to find images that are similar to a query image in content but belong to a different domain. Finally, we exploit the image retrieval system to fine-tune the image-to-image translation model and generate higher-quality images. Our experiments show the effectiveness of the proposed solution and highlight the contribution of the retrieval network, which can benefit from additional unlabeled data and help image-to-image translation models in the presence of scarce data.
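A very schematic sketch of the third stage (retrieval-guided fine-tuning): the translator is nudged so that its outputs land near real target-domain images in the retrieval embedding space. All module names and the exact loss form are assumptions; the adversarial and reconstruction losses of the earlier stages are omitted:

```python
# Hypothetical retrieval-guidance term: G is the translator, R the
# retrieval encoder; names and loss form are assumptions, not the paper's.
import torch
import torch.nn.functional as F

def retrieval_guidance_loss(G, R, x_src, target_domain, real_bank):
    """real_bank: precomputed R-embeddings of real target-domain images, (N, D)."""
    fake = G(x_src, target_domain)              # translated images, (B, C, H, W)
    emb = F.normalize(R(fake), dim=1)           # (B, D)
    bank = F.normalize(real_bank, dim=1)        # (N, D)
    nearest = (emb @ bank.T).max(dim=1).values  # best real match per sample
    return (1.0 - nearest).mean()               # pull fakes toward real neighbors
```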
Minesh Mathew, Dimosthenis Karatzas, & C.V. Jawahar. (2021). DocVQA: A Dataset for VQA on Document Images. In IEEE Winter Conference on Applications of Computer Vision (pp. 2200–2209).
Abstract: We present a new dataset for Visual Question Answering (VQA) on document images called DocVQA. The dataset consists of 50,000 questions defined on 12,000+ document images. A detailed analysis of the dataset in comparison with similar datasets for VQA and reading comprehension is presented. We report several baseline results obtained by adopting existing VQA and reading comprehension models. Although the existing models perform reasonably well on certain types of questions, there is a large performance gap compared to human performance (94.36% accuracy). The models particularly need to improve on questions where understanding the structure of the document is crucial. The dataset, code, and leaderboard are available at docvqa.org
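Beyond plain accuracy, the DocVQA leaderboard scores answers with Average Normalized Levenshtein Similarity (ANLS), which gives partial credit for near-miss strings. A minimal sketch of that metric; the 0.5 cutoff follows the standard ANLS definition:

```python
# Sketch of ANLS scoring for a single question with multiple gold answers.
def levenshtein(a, b):
    """Classic edit-distance dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def anls(pred, gold_answers, tau=0.5):
    """Best normalized similarity over gold answers; zero below the cutoff."""
    best = 0.0
    for gold in gold_answers:
        p, g = pred.lower().strip(), gold.lower().strip()
        nl = levenshtein(p, g) / max(len(p), len(g), 1)
        best = max(best, 1.0 - nl)
    return best if best >= tau else 0.0
```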
Tomas Sixta, Julio C. S. Jacques Junior, Pau Buch Cardona, Eduard Vazquez, & Sergio Escalera. (2020). FairFace Challenge at ECCV 2020: Analyzing Bias in Face Recognition. In ECCV Workshops (Vol. 12540, pp. 463–481). LNCS.
Abstract: This work summarizes the 2020 ChaLearn Looking at People Fair Face Recognition and Analysis Challenge and provides a description of the top-winning solutions and an analysis of the results. The aim of the challenge was to evaluate the accuracy and the gender and skin-colour bias of submitted algorithms on the task of 1:1 face verification in the presence of other confounding attributes. Participants were evaluated using an in-the-wild dataset based on a reannotated version of IJB-C, further enriched with 12.5K new images and additional labels. The dataset is not balanced, which simulates a real-world scenario where AI-based models that are supposed to produce fair outcomes are trained and evaluated on imbalanced data. The challenge attracted 151 participants, who made more than 1.8K submissions in total. The final phase of the challenge attracted 36 active teams, 10 of which exceeded 0.999 AUC-ROC while achieving very low scores on the proposed bias metrics. Common strategies among the participants were face pre-processing, homogenization of data distributions, the use of bias-aware loss functions, and ensemble models. The analysis of the top-10 teams shows higher false positive rates (and lower false negative rates) for females with dark skin tone, as well as the potential of eyeglasses and young age to increase false positive rates.
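The bias analysis described above boils down to computing error rates per protected subgroup at a fixed verification threshold. A minimal sketch; the array layout and names are assumptions, not the challenge's evaluation code:

```python
# Per-subgroup false positive / false negative rates for a 1:1 verifier.
import numpy as np

def subgroup_rates(scores, labels, groups, threshold):
    """scores: similarity per pair; labels: 1 = same identity, 0 = different;
    groups: protected attribute per pair. Returns {group: (FPR, FNR)}."""
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    pred = scores >= threshold
    out = {}
    for g in np.unique(groups):
        m = groups == g
        fp = np.sum(pred[m] & (labels[m] == 0))   # impostor accepted
        fn = np.sum(~pred[m] & (labels[m] == 1))  # genuine rejected
        negatives = np.sum(labels[m] == 0)
        positives = np.sum(labels[m] == 1)
        out[g] = (fp / max(negatives, 1), fn / max(positives, 1))
    return out
```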
Zhengying Liu, Zhen Xu, Shangeth Rajaa, Meysam Madadi, Julio C. S. Jacques Junior, Sergio Escalera, et al. (2020). Towards Automated Deep Learning: Analysis of the AutoDL challenge series 2019. In Proceedings of Machine Learning Research (Vol. 123, pp. 242–252).
Abstract: We present the design and results of recent competitions in Automated Deep Learning (AutoDL). In the AutoDL challenge series 2019, we organized 5 machine learning challenges: AutoCV, AutoCV2, AutoNLP, AutoSpeech, and AutoDL. The first 4 challenges each concern a specific application domain, such as computer vision, natural language processing, and speech recognition. As of March 2020, the last challenge, AutoDL, is still ongoing, and we only present its design. Some highlights of this work include: (1) a benchmark suite of baseline AutoML solutions, with emphasis on domains for which Deep Learning methods have had prior success (image, video, text, speech, etc.); (2) a novel any-time learning framework, which opens doors for further theoretical consideration; (3) a repository of around 100 datasets (from all the above domains), over half of which are released as public datasets to enable research on meta-learning; (4) analyses revealing that winning solutions generalize to new unseen datasets, validating progress towards a universal AutoML solution; (5) open-sourcing of the challenge platform, the starting kit, the dataset formatting toolkit, and all winning solutions (all information is available at autodl.chalearn.org).
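The "any-time learning" framework in highlight (2) scores the whole learning curve rather than only the final model. A minimal sketch of an area-under-learning-curve score of that flavor; the log-time scaling and normalization here are assumptions, not the challenge's exact formula:

```python
# Area under the (time, score) learning curve, rewarding early accuracy.
import numpy as np

def area_under_learning_curve(times, scores, t_max):
    """times: seconds at which predictions were submitted; scores in [0, 1]."""
    t = np.log1p(np.asarray(times, dtype=float)) / np.log1p(t_max)
    s = np.asarray(scores, dtype=float)
    # trapezoid rule over the normalized log-time axis
    return float(np.sum((s[1:] + s[:-1]) * np.diff(t)) / 2)

# A fast learner beats a slow one even with the same final score:
fast = area_under_learning_curve([60, 600, 1200], [0.70, 0.74, 0.75], t_max=1200)
slow = area_under_learning_curve([60, 600, 1200], [0.10, 0.40, 0.75], t_max=1200)
assert fast > slow
```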
Albert Clapes, Julio C. S. Jacques Junior, Carla Morral, & Sergio Escalera. (2020). ChaLearn LAP 2020 Challenge on Identity-preserved Human Detection: Dataset and Results. In 15th IEEE International Conference on Automatic Face and Gesture Recognition (pp. 801–808).
Abstract: This paper summarizes the ChaLearn Looking at People 2020 Challenge on Identity-preserved Human Detection (IPHD). For this purpose, we released a large novel dataset containing more than 112K pairs of spatiotemporally aligned depth and thermal frames (and 175K instances of humans) sampled from 780 sequences. The sequences contain hundreds of non-identifiable people appearing in a mix of in-the-wild and scripted scenarios recorded in public and private places. The competition was divided into three tracks depending on the modalities exploited for the detection: (1) depth, (2) thermal, and (3) depth-thermal fusion. Color was also captured but only used to facilitate the ground-truth annotation. Still, the temporal synchronization of the three sensory devices is challenging, so bad temporal matches across modalities can occur. Hence, the labels provided should be considered “weak”, although test frames were carefully selected to minimize this effect and to ensure the fairest comparison of the participants’ results. Despite this added difficulty, the results obtained by the participants demonstrate that current fully-supervised methods can cope with this setting and achieve outstanding detection performance when measured in terms of AP@0.50.
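AP@0.50, the challenge metric, counts a detection as a true positive when it overlaps a still-unmatched ground-truth box with IoU of at least 0.5, then integrates precision over recall. A simplified single-class sketch, not the organizers' evaluation script:

```python
# Simplified single-class AP@0.50 over a set of images.
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def ap_at_50(dets, gts):
    """dets: list of (image_id, score, box); gts: {image_id: [box, ...]}."""
    dets = sorted(dets, key=lambda d: -d[1])              # confident first
    matched = {img: [False] * len(b) for img, b in gts.items()}
    tp = []
    for img, _, box in dets:
        ious = [iou(box, g) for g in gts.get(img, [])]
        j = int(np.argmax(ious)) if ious else -1
        hit = j >= 0 and ious[j] >= 0.5 and not matched[img][j]
        if hit:
            matched[img][j] = True                        # each GT matched once
        tp.append(float(hit))
    cum_tp = np.cumsum(tp)
    recall = cum_tp / max(sum(len(b) for b in gts.values()), 1)
    precision = cum_tp / np.arange(1, len(tp) + 1)
    # area under the precision-recall curve (trapezoid rule)
    return float(np.sum((precision[1:] + precision[:-1]) * np.diff(recall)) / 2)
```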
Zhengying Liu, Adrien Pavao, Zhen Xu, Sergio Escalera, Isabelle Guyon, Julio C. S. Jacques Junior, et al. (2020). How far are we from true AutoML: reflection from winning solutions and results of AutoDL challenge. In 7th ICML Workshop on Automated Machine Learning.
Abstract: Following the completion of the AutoDL challenge (the final challenge in the ChaLearn AutoDL challenge series 2019), we investigate winning solutions and challenge results to answer an important motivational question: how far are we from achieving true AutoML? On one hand, the winning solutions achieve good (accurate and fast) classification performance on unseen datasets. On the other hand, all winning solutions still contain a considerable amount of hard-coded knowledge about the domain (or modality), such as image, video, text, speech, and tabular data. This form of ad-hoc meta-learning could be replaced by more automated forms of meta-learning in the future. Organizing a meta-learning challenge could help forge AutoML solutions that generalize to new unseen domains (e.g., new types of sensor data) and yield insights into the AutoML problem from a more fundamental point of view. The datasets of the AutoDL challenge are a resource that can be used for further benchmarks, and the code of the winners has been open-sourced, which is a big step towards “democratizing” Deep Learning.
Aymen Azaza, Joost Van de Weijer, Ali Douik, Javad Zolfaghari Bengar, & Marc Masana. (2020). Saliency from High-Level Semantic Image Features. SN - SN Computer Science, 1–12.
Abstract: Top-down semantic information is known to play an important role in assigning saliency. Recently, large strides have been made in improving state-of-the-art semantic image understanding in the fields of object detection and semantic segmentation. Since these methods have now reached a high level of maturity, evaluating the impact of high-level image understanding on saliency estimation has become feasible. We propose several saliency features that are computed from object detection and semantic segmentation results. We combine these features with a standard baseline method for saliency detection to evaluate their importance. Experiments demonstrate that the proposed features derived from object detection and semantic segmentation significantly improve saliency estimation. Moreover, they show that our method obtains state-of-the-art results on three datasets (FT, ImgSal, and SOD) and competitive results on four other datasets (ECSSD, PASCAL-S, MSRA-B, and HKU-IS).
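A minimal sketch of the combination idea: per-pixel cues derived from detection and segmentation outputs merged with a baseline saliency map. The fixed weighted sum here is an illustrative stand-in; the paper evaluates the features within a learned saliency model:

```python
# Illustrative fusion of a baseline saliency map with semantic cues;
# the fixed weights stand in for the paper's learned combination.
import numpy as np

def combine_saliency(baseline, semantic_cues, weights):
    """baseline and each cue: HxW arrays in [0, 1]; weights: len(cues)+1 floats."""
    total = weights[0] * baseline
    for w, cue in zip(weights[1:], semantic_cues):
        total = total + w * cue  # e.g. per-pixel objectness or class masks
    return np.clip(total / (sum(weights) + 1e-9), 0.0, 1.0)
```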