Author Akhil Gurram; Ahmet Faruk Tuna; Fengyi Shen; Onay Urfalioglu; Antonio Lopez
  Title Monocular Depth Estimation through Virtual-world Supervision and Real-world SfM Self-Supervision Type Journal Article
  Year 2021 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS  
  Volume 23 Issue 8 Pages 12738-12751  
  Keywords  
  Abstract Depth information is essential for on-board perception in autonomous driving and driver assistance. Monocular depth estimation (MDE) is very appealing since it puts appearance and depth in direct pixelwise correspondence without further calibration. The best MDE models are based on Convolutional Neural Networks (CNNs) trained in a supervised manner, i.e., assuming pixelwise ground truth (GT). Usually, this GT is acquired at training time through a calibrated multi-modal suite of sensors. However, using only a monocular system at training time as well is cheaper and more scalable. This is possible by relying on structure-from-motion (SfM) principles to generate self-supervision. Nevertheless, problems of camouflaged objects, visibility changes, static-camera intervals, textureless areas, and scale ambiguity diminish the usefulness of such self-supervision. In this paper, we perform monocular depth estimation by virtual-world supervision (MonoDEVS) and real-world SfM self-supervision. We compensate for the limitations of SfM self-supervision by leveraging virtual-world images with accurate semantic and depth supervision, and by addressing the virtual-to-real domain gap. Our MonoDEVSNet outperforms previous MDE CNNs trained on monocular and even stereo sequences. (A schematic sketch of the SfM self-supervision signal follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.118 Approved no  
  Call Number Admin @ si @ GTS2021 Serial 3598  
Permanent link to this record
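The core of the SfM self-supervision mentioned above is view synthesis: warp a neighboring frame into the target view using the predicted depth and relative pose, then penalize the photometric difference. Below is a minimal PyTorch sketch of that signal; it is not the authors' code, and the tensor shapes, the plain L1 penalty, and the absence of occlusion/static-pixel masks are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    """Lift each pixel to a 3D point using the predicted depth.
    depth: (B,1,H,W); K_inv: (B,3,3) inverse camera intrinsics."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).float()      # (3,H,W)
    pix = pix.view(1, 3, -1).expand(b, -1, -1).to(depth.device)   # (B,3,H*W)
    return (K_inv @ pix) * depth.view(b, 1, -1)                   # rays * depth

def project(points, K, T):
    """Apply the relative pose T (B,4,4), project with intrinsics K (B,3,3)."""
    b, _, n = points.shape
    homo = torch.cat([points, torch.ones(b, 1, n, device=points.device)], dim=1)
    cam = (T @ homo)[:, :3]                                       # (B,3,H*W)
    uv = K @ cam
    return uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)

def photometric_loss(target, source, depth, pose, K, K_inv):
    """Warp `source` into the target view and compare photometrically."""
    b, _, h, w = target.shape
    uv = project(backproject(depth, K_inv), K, pose).view(b, 2, h, w)
    grid = torch.stack([2 * uv[:, 0] / (w - 1) - 1,               # to [-1,1]
                        2 * uv[:, 1] / (h - 1) - 1], dim=-1)
    warped = F.grid_sample(source, grid, align_corners=True)
    return (warped - target).abs().mean()
```

Real systems add an SSIM term and masking, which is precisely where the camouflage, visibility-change and static-camera problems listed in the abstract enter.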
 

 
Author Debora Gil; Guillermo Torres
  Title A multi-shape loss function with adaptive class balancing for the segmentation of lung structures Type Conference Article
  Year 2020 Publication 34th International Congress and Exhibition on Computer Assisted Radiology & Surgery Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Virtual; June 2020  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CARS  
  Notes IAM; 600.139; 600.145 Approved no  
  Call Number Admin @ si @ GiT2020 Serial 3472  
Permanent link to this record
 

 
Author Debora Gil; Oriol Ramos Terrades; Raquel Perez
  Title Topological Radiomics (TOPiomics): Early Detection of Genetic Abnormalities in Cancer Treatment Evolution Type Conference Article
  Year 2020 Publication Women in Geometry and Topology Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Barcelona; September 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes IAM; DAG; 600.139; 600.145; 600.121 Approved no  
  Call Number Admin @ si @ GRP2020 Serial 3473  
Permanent link to this record
 

 
Author Debora Gil; Katerine Diaz; Carles Sanchez; Aura Hernandez-Sabate
  Title Early Screening of SARS-CoV-2 by Intelligent Analysis of X-Ray Images Type Miscellaneous
  Year 2020 Publication Arxiv Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Future SARS-CoV-2 outbreaks (COVID-XX) may well occur in the coming years. However, the pathology in humans is so recent that many clinical aspects, like early detection of complications, side effects after recovery or early screening, are currently unknown. Despite the number of COVID-19 cases, its rapid spread, which has pushed many health systems to the edge of collapse, has hindered proper collection and analysis of data related to the clinical aspects of COVID-19. We describe an interdisciplinary initiative that integrates clinical research with image diagnostics and the use of new technologies such as artificial intelligence and radiomics, with the aim of clarifying some open questions about SARS-CoV-2. The initiative addresses three main points: 1) collection of standardized data including images, clinical data and analytics; 2) COVID-19 screening for early diagnosis at primary care centers; 3) definition of radiomic signatures of COVID-19 evolution and associated pathologies for the early treatment of complications. In particular, in this paper we present a general overview of the project, the experimental design and first results of X-ray COVID-19 detection using a classic approach based on HOG and feature selection (sketched after this record). Our experiments include a comparison to some recent methods for COVID-19 screening in X-ray and an exploratory analysis of the feasibility of X-ray COVID-19 screening. Results show that classic approaches can outperform deep-learning methods in this experimental setting, indicate the feasibility of early COVID-19 screening, and show that non-COVID infiltration is the group of patients most similar to COVID-19 in terms of the radiological description of their X-rays. Therefore, an efficient COVID-19 screening should be complemented with other clinical data to better discriminate these cases.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes IAM; 600.139; 600.145; 601.337 Approved no  
  Call Number Admin @ si @ GDS2020 Serial 3474  
Permanent link to this record
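The classic pipeline named in the abstract (HOG descriptors plus feature selection) can be illustrated in a few lines. This is a hedged sketch, not the project's code: the descriptor parameters, the SelectKBest/LinearSVC choices, and the names X_train/y_train are all assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def hog_features(images):
    """images: iterable of 2D grayscale X-ray arrays, already resized."""
    return np.array([hog(im, orientations=9, pixels_per_cell=(16, 16),
                         cells_per_block=(2, 2), block_norm="L2-Hys")
                     for im in images])

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=500),  # keep informative dimensions
                    LinearSVC(C=1.0))
# X_train, y_train: hypothetical preprocessed X-rays and class labels
# (e.g., COVID-19 / non-COVID infiltration / normal).
# clf.fit(hog_features(X_train), y_train)
```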
 

 
Author Oriol Ramos Terrades; Albert Berenguel; Debora Gil
  Title A flexible outlier detector based on a topology given by graph communities Type Miscellaneous
  Year 2020 Publication Arxiv Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Outlier, or anomaly, detection is essential for the optimal performance of machine learning methods and statistical predictive models. It is not just a technical step in a data-cleaning process but a key topic in many fields, such as fraudulent document detection, medical applications and assisted diagnosis systems, or the detection of security threats. In contrast to population-based methods, neighborhood-based local approaches are simple, flexible methods that have the potential to perform well in small-sample-size, unbalanced problems. However, a main concern of local approaches is the impact that the computation of each sample's neighborhood has on the method's performance. Most approaches use a distance in the feature space to define a single neighborhood, which requires careful selection of several parameters. This work presents a local approach based on a local measure of the heterogeneity of sample labels in the feature space, considered as a topological manifold. Topology is computed using the communities of a weighted graph codifying mutual nearest neighbors in the feature space (see the sketch after this record). This way, we provide a set of multiple neighborhoods able to describe the structure of complex spaces without parameter fine-tuning. Extensive experiments on real-world data sets show that our approach overall outperforms both local and global strategies in multi- and single-view settings.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes IAM; DAG; 600.139; 600.145; 600.140; 600.121 Approved no  
  Call Number Admin @ si @ RBG2020 Serial 3475  
Permanent link to this record
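A rough sketch of the construction the abstract describes: connect mutual nearest neighbors, take graph communities as neighborhoods, and score each sample by the label heterogeneity of its community. This is an interpretation under stated assumptions; k, the community algorithm, and the scoring rule are not taken from the paper.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors
from networkx.algorithms.community import greedy_modularity_communities

def mutual_knn_graph(X, k=10):
    """Edge (i, j) only when each point is among the other's k nearest neighbors."""
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    neigh = [set(row[1:]) for row in idx]        # row[0] is the point itself
    g = nx.Graph()
    g.add_nodes_from(range(len(X)))
    g.add_edges_from((i, j) for i, ns in enumerate(neigh)
                     for j in ns if i in neigh[j])
    return g

def outlier_scores(g, y):
    """y: np.ndarray of labels; high score = label disagrees with its community."""
    scores = np.zeros(len(y))
    for com in greedy_modularity_communities(g):
        members = np.array(list(com))
        for i in members:
            scores[i] = np.mean(y[members] != y[i])
    return scores
```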
 

 
Author Diego Porres
  Title Discriminator Synthesis: On reusing the other half of Generative Adversarial Networks Type Conference Article
  Year 2021 Publication Machine Learning for Creativity and Design, Neurips Workshop Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Generative Adversarial Networks have long since revolutionized the world of computer vision and, tied to it, the world of art. Arduous efforts have gone into fully utilizing and stabilizing training so that the outputs of the Generator network have the highest possible fidelity, but little has gone into using the Discriminator after training is complete. In this work, we propose to use the latter and show a way to use the features it has learned from the training dataset to both alter an image and generate one from scratch. We name this method Discriminator Dreaming (a minimal sketch follows this record), and the full code can be found at this https URL.
  Address Virtual; December 2021  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference NEURIPSW  
  Notes ADAS; 601.365 Approved no  
  Call Number Admin @ si @ Por2021 Serial 3597  
Permanent link to this record
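The mechanism named in the abstract resembles DeepDream applied to a GAN discriminator: gradient-ascend on the input pixels to amplify chosen discriminator activations, starting from a photo (to alter it) or from noise (to generate from scratch). A minimal sketch, assuming a callable `layer_out` that runs the discriminator up to a chosen layer; this is not the released code at the linked URL.

```python
import torch

def dream(layer_out, image, steps=200, lr=0.05):
    """image: (1,3,H,W) tensor in the discriminator's expected input range.
    layer_out: runs the pretrained discriminator up to some layer and
    returns its activations (hypothetical interface)."""
    x = image.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -layer_out(x).pow(2).mean()   # ascend on activation energy
        loss.backward()
        opt.step()
    return x.detach()

# start = torch.randn(1, 3, 256, 256)        # noise start: generate from scratch
```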
 

 
Author Riccardo Del Chiaro; Bartlomiej Twardowski; Andrew Bagdanov; Joost Van de Weijer
  Title Recurrent attention to transient tasks for continual image captioning Type Conference Article
  Year 2020 Publication 34th Conference on Neural Information Processing Systems Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Research on continual learning has led to a variety of approaches to mitigating catastrophic forgetting in feed-forward classification networks. Until now, surprisingly little attention has been paid to continual learning of recurrent models applied to problems like image captioning. In this paper we take a systematic look at continual learning of LSTM-based models for image captioning. We propose an attention-based approach that explicitly accommodates the transient nature of vocabularies in continual image captioning tasks -- i.e., that task vocabularies are not disjoint. We call our method Recurrent Attention to Transient Tasks (RATT), and also show how to adapt continual learning approaches based on weight regularization and knowledge distillation to recurrent continual learning problems (a sketch of the regularization term follows this record). We apply our approaches to the incremental image captioning problem on two new continual learning benchmarks we define using the MS-COCO and Flickr30k datasets. Our results demonstrate that RATT is able to sequentially learn five captioning tasks while incurring no forgetting of previously learned ones.
  Address virtual; December 2020  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference NEURIPS  
  Notes LAMP; 600.120 Approved no  
  Call Number Admin @ si @ CTB2020 Serial 3484  
Permanent link to this record
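The weight-regularization ingredient the abstract adapts to recurrent models is typically an EWC-style quadratic penalty anchoring important parameters to their values after the previous task. A minimal sketch under that assumption; `fisher` (per-parameter importances) and `theta_old` (previous-task weights) are assumed inputs, not the paper's exact formulation.

```python
import torch

def weight_reg_penalty(model, theta_old, fisher, lam=1.0):
    """Penalize drift of important weights (e.g., in an LSTM captioner)."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        if name in theta_old:
            loss = loss + (fisher[name] * (p - theta_old[name]).pow(2)).sum()
    return lam * loss

# total_loss = caption_nll + weight_reg_penalty(captioner, theta_old, fisher)
```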
 

 
Author Yaxing Wang; Lu Yu; Joost Van de Weijer
  Title DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs Type Conference Article
  Year 2020 Publication 34th Conference on Neural Information Processing Systems Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Image-to-image translation has recently achieved remarkable results. But despite current success, it suffers from inferior performance when translations between classes require large shape changes. We attribute this to the high-resolution bottlenecks which are used by current state-of-the-art image-to-image methods. Therefore, in this work, we propose a novel deep hierarchical Image-to-Image Translation method, called DeepI2I. We learn a model by leveraging hierarchical features: (a) structural information contained in the shallow layers and (b) semantic information extracted from the deep layers. To enable the training of deep I2I models on small datasets, we propose a novel transfer learning method that transfers knowledge from pre-trained GANs. Specifically, we leverage the discriminator of a pre-trained GAN (i.e., BigGAN or StyleGAN) to initialize both the encoder and the discriminator, and the pre-trained generator to initialize the generator of our model (see the sketch after this record). Applying knowledge transfer leads to an alignment problem between the encoder and generator. We introduce an adaptor network to address this. On many-class image-to-image translation on three datasets (Animal faces, Birds, and Foods) we decrease mFID by at least 35% when compared to the state-of-the-art. Furthermore, we qualitatively and quantitatively demonstrate that transfer learning significantly improves the performance of I2I systems, especially for small datasets. Finally, we are the first to perform I2I translations for domains with over 100 classes.
  Address virtual; December 2020  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference NEURIPS  
  Notes LAMP; 600.120 Approved no  
  Call Number Admin @ si @ WYW2020 Serial 3485  
Permanent link to this record
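The transfer scheme the abstract outlines can be pictured as three weight copies plus a small adaptor. The sketch below only mirrors that wiring; the channel size, the deepcopy-based initialization, and the 1x1-conv adaptor are illustrative assumptions, not the paper's architecture.

```python
import copy
import torch.nn as nn

def init_deepi2i(pretrained_D, pretrained_G, feat_channels=512):
    """pretrained_D / pretrained_G: modules from a pre-trained GAN (assumed)."""
    encoder = copy.deepcopy(pretrained_D)        # hierarchical feature extractor
    discriminator = copy.deepcopy(pretrained_D)  # trained adversarially afterwards
    generator = copy.deepcopy(pretrained_G)
    # adaptor network: aligns encoder features with what the generator expects
    adaptor = nn.Conv2d(feat_channels, feat_channels, kernel_size=1)
    return encoder, adaptor, generator, discriminator
```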
 

 
Author Yaxing Wang; Salman Khan; Abel Gonzalez-Garcia; Joost Van de Weijer; Fahad Shahbaz Khan
  Title Semi-supervised Learning for Few-shot Image-to-Image Translation Type Conference Article
  Year 2020 Publication 33rd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract In the last few years, unpaired image-to-image translation has witnessed remarkable progress. Although the latest methods are able to generate realistic images, they crucially rely on a large number of labeled images. Recently, some methods have tackled the challenging setting of few-shot image-to-image translation, reducing the labeled data requirements for the target domain during inference. In this work, we go one step further and also reduce the amount of required labeled data from the source domain during training. To do so, we propose applying semi-supervised learning via a noise-tolerant pseudo-labeling procedure (sketched after this record). We also apply a cycle consistency constraint to further exploit the information from unlabeled images, either from the same dataset or external ones. Additionally, we propose several structural modifications to facilitate the image translation task under these circumstances. Our semi-supervised method for few-shot image translation, called SEMIT, achieves excellent results on four different datasets using as little as 10% of the source labels, and matches the performance of the main fully-supervised competitor using only 20% labeled data. Our code and models are made public at: this https URL.
  Address Virtual; June 2020  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPR  
  Notes LAMP; 600.120 Approved no  
  Call Number Admin @ si @ WKG2020 Serial 3486  
Permanent link to this record
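A sketch of the confidence-thresholded pseudo-labeling step described above. The classifier, loader, and threshold are assumptions, and the actual method adds noise-handling beyond a fixed threshold.

```python
import torch

@torch.no_grad()
def pseudo_label(classifier, unlabeled_loader, threshold=0.95):
    """Keep only predictions the source classifier is confident about, which
    makes the procedure tolerant to label noise (hypothetical interface)."""
    images, labels = [], []
    for x in unlabeled_loader:                    # x: batch of source images
        probs = torch.softmax(classifier(x), dim=1)
        conf, pred = probs.max(dim=1)
        keep = conf > threshold
        images.append(x[keep])
        labels.append(pred[keep])
    return torch.cat(images), torch.cat(labels)
```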
 

 
Author Yi Xiao; Felipe Codevilla; Christopher Pal; Antonio Lopez
  Title Action-Based Representation Learning for Autonomous Driving Type Conference Article
  Year 2020 Publication Conference on Robot Learning Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Human drivers produce a vast amount of data which could, in principle, be used to improve autonomous driving systems. Unfortunately, seemingly straightforward approaches for creating end-to-end driving models that map sensor data directly into driving actions are problematic in terms of interpretability, and typically have significant difficulty dealing with spurious correlations. Alternatively, we propose to use this kind of action-based driving data for learning representations (see the two-stage sketch after this record). Our experiments show that an affordance-based driving model pre-trained with this approach can leverage a relatively small amount of weakly annotated imagery and outperform pure end-to-end driving models, while being more interpretable. Further, we demonstrate how this strategy outperforms previous methods based on learning inverse dynamics models as well as other methods based on heavy human supervision (ImageNet).
  Address virtual; November 2020  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CORL  
  Notes ADAS; 600.118 Approved no  
  Call Number Admin @ si @ XCP2020 Serial 3487  
Permanent link to this record
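The two-stage recipe in the abstract (pre-train an encoder by predicting driver actions, then fit a small affordance head on weakly annotated images) can be sketched as follows. Everything here, module shapes, losses and head outputs included, is a hypothetical illustration of the strategy, not the paper's model.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())   # image -> 32-d

# Stage 1: action-based pre-training (behavior cloning on abundant driving logs)
action_head = nn.Linear(32, 3)            # steer, throttle, brake
def stage1_loss(images, actions):
    return nn.functional.mse_loss(action_head(encoder(images)), actions)

# Stage 2: affordance fine-tuning on a small weakly annotated set
affordance_head = nn.Linear(32, 2)        # e.g., red light ahead, hazard ahead
def stage2_loss(images, affordances):
    with torch.no_grad():                 # reuse the pre-trained representation
        feats = encoder(images)
    return nn.functional.binary_cross_entropy_with_logits(
        affordance_head(feats), affordances)
```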
 

 
Author Gabriel Villalonga; Antonio Lopez
  Title Co-Training for On-Board Deep Object Detection Type Journal Article
  Year 2020 Publication IEEE Access Abbreviated Journal ACCESS  
  Volume Issue Pages 194441-194456  
  Keywords  
  Abstract Providing ground-truth supervision to train visual models has been a bottleneck over the years, exacerbated by domain shifts which degrade the performance of such models. This was the case when visual tasks relied on handcrafted features and shallow machine learning, and, despite its unprecedented performance gains, the problem remains open within the deep learning paradigm due to its data-hungry nature. The best-performing deep vision-based object detectors are trained in a supervised manner by relying on human-labeled bounding boxes which localize class instances (i.e. objects) within the training images. Thus, object detection is one such task for which human labeling is a major bottleneck. In this article, we assess co-training as a semi-supervised learning method for self-labeling objects in unlabeled images, so reducing the human-labeling effort for developing deep object detectors (a schematic loop follows this record). Our study pays special attention to a scenario involving domain shift; in particular, when we have automatically generated virtual-world images with object bounding boxes and real-world images which are unlabeled. Moreover, we are particularly interested in using co-training for deep object detection in the context of driver assistance systems and/or self-driving vehicles. Thus, using well-established datasets and protocols for object detection in these application contexts, we show how co-training is a paradigm worth pursuing for alleviating object labeling, working both alone and together with task-agnostic domain adaptation.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.118 Approved no  
  Call Number Admin @ si @ ViL2020 Serial 3488  
Permanent link to this record
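Schematically, the co-training loop alternates retraining two detectors and exchanging their confident self-labels. In the sketch below, `train` and `detect` are assumed helper functions; this illustrates the paradigm, not the article's exact protocol.

```python
def co_train(det_a, det_b, labeled, unlabeled, rounds=5, conf_thr=0.8):
    """labeled: list of (image, boxes); unlabeled: list of images."""
    pseudo_a, pseudo_b = [], []
    for _ in range(rounds):
        train(det_a, labeled + pseudo_b)   # A learns from B's self-labels
        train(det_b, labeled + pseudo_a)   # and vice versa
        # keep only images where a detector produced confident boxes
        pseudo_a = [(img, boxes) for img in unlabeled
                    if (boxes := detect(det_a, img, conf_thr))]
        pseudo_b = [(img, boxes) for img in unlabeled
                    if (boxes := detect(det_b, img, conf_thr))]
    return det_a, det_b
```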
 

 
Author Hannes Mueller; Andre Groger; Jonathan Hersh; Andrea Matranga; Joan Serrat
  Title Monitoring War Destruction from Space: A Machine Learning Approach Type Miscellaneous
  Year 2020 Publication Arxiv Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Existing data on building destruction in conflict zones rely on eyewitness reports or manual detection, which makes them generally scarce, incomplete and potentially biased. This lack of reliable data imposes severe limitations on media reporting, humanitarian relief efforts, human rights monitoring, reconstruction initiatives, and academic studies of violent conflict. This article introduces an automated method for measuring destruction in high-resolution satellite images using deep learning techniques combined with data augmentation to expand the training samples (an illustrative augmentation pipeline follows this record). We apply this method to the Syrian civil war and reconstruct the evolution of damage in major cities across the country. The approach allows generating destruction data with unprecedented scope, resolution, and frequency – limited only by the available satellite imagery – which can alleviate data limitations decisively.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.118 Approved no  
  Call Number Admin @ si @ MGH2020 Serial 3489  
Permanent link to this record
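Data augmentation for satellite tiles typically exploits their rotational symmetry. A small torchvision pipeline of the kind the abstract alludes to might look as follows; the specific transforms are assumptions, not the article's configuration.

```python
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),                  # tiles have no canonical "up"
    T.RandomRotation(degrees=90),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.ToTensor(),
])
# expanded = [augment(tile) for tile in labeled_tiles for _ in range(n_copies)]
```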
 

 
Author Yi Xiao; Felipe Codevilla; Akhil Gurram; Onay Urfalioglu; Antonio Lopez
  Title Multimodal end-to-end autonomous driving Type Journal Article
  Year 2020 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS  
  Volume Issue Pages 1-11  
  Keywords  
  Abstract A crucial component of an autonomous vehicle (AV) is the artificial intelligence (AI) that is able to drive towards a desired destination. Today, there are different paradigms addressing the development of AI drivers. On the one hand, we find modular pipelines, which divide the driving task into sub-tasks such as perception, and maneuver planning and control. On the other hand, we find end-to-end driving approaches that try to learn a direct mapping from input raw sensor data to vehicle control signals. The latter are relatively less studied but are gaining popularity since they are less demanding in terms of sensor data annotation. This paper focuses on end-to-end autonomous driving. So far, most proposals relying on this paradigm assume RGB images as input sensor data. However, AVs will not be equipped only with cameras, but also with active sensors providing accurate depth information (e.g., LiDARs). Accordingly, this paper analyses whether combining RGB and depth modalities, i.e. using RGBD data, produces better end-to-end AI drivers than relying on a single modality. We consider multimodality based on early, mid and late fusion schemes (the first two are sketched after this record), both in multisensory and single-sensor (monocular depth estimation) settings. Using the CARLA simulator and conditional imitation learning (CIL), we show how, indeed, early fusion multimodality outperforms single-modality.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number Admin @ si @ XCG2020 Serial 3490  
Permanent link to this record
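Early and mid fusion of RGB and depth differ in where the modalities meet. A compact PyTorch sketch with illustrative channel sizes and a 3-output control head (steer, throttle, brake); the paper's CIL architecture is more elaborate than this.

```python
import torch
import torch.nn as nn

def small_encoder(in_ch):
    return nn.Sequential(nn.Conv2d(in_ch, 32, 5, stride=2), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())

class EarlyFusion(nn.Module):
    """Concatenate RGB (3ch) + depth (1ch) at the input; one shared encoder."""
    def __init__(self):
        super().__init__()
        self.encoder, self.head = small_encoder(4), nn.Linear(32, 3)

    def forward(self, rgb, depth):
        return self.head(self.encoder(torch.cat([rgb, depth], dim=1)))

class MidFusion(nn.Module):
    """One encoder per modality; fuse feature vectors before the control head."""
    def __init__(self):
        super().__init__()
        self.rgb_enc, self.d_enc = small_encoder(3), small_encoder(1)
        self.head = nn.Linear(64, 3)

    def forward(self, rgb, depth):
        feats = torch.cat([self.rgb_enc(rgb), self.d_enc(depth)], dim=1)
        return self.head(feats)
```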
 

 
Author Andres Mafla; Sounak Dey; Ali Furkan Biten; Lluis Gomez; Dimosthenis Karatzas
  Title Multi-modal reasoning graph for scene-text based fine-grained image classification and retrieval Type Conference Article
  Year 2021 Publication IEEE Winter Conference on Applications of Computer Vision Abbreviated Journal  
  Volume Issue Pages 4022-4032  
  Keywords  
  Abstract  
  Address Virtual; January 2021  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference WACV  
  Notes DAG; 600.121 Approved no  
  Call Number Admin @ si @ MDB2021 Serial 3491  
Permanent link to this record
 

 
Author Andres Mafla; Rafael S. Rezende; Lluis Gomez; Diana Larlus; Dimosthenis Karatzas
  Title StacMR: Scene-Text Aware Cross-Modal Retrieval Type Conference Article
  Year 2021 Publication IEEE Winter Conference on Applications of Computer Vision Abbreviated Journal  
  Volume Issue Pages 2219-2229  
  Keywords  
  Abstract  
  Address Virtual; January 2021  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference WACV  
  Notes DAG; 600.121 Approved no  
  Call Number Admin @ si @ MRG2021a Serial 3492  
Permanent link to this record