|
Yaxing Wang, L. Zhang, & Joost Van de Weijer. (2016). Ensembles of generative adversarial networks. In 30th Annual Conference on Neural Information Processing Systems Workshops.
Abstract: Ensembles are a popular way to improve the results of discriminative CNNs. The combination of several networks trained from different initializations improves results significantly. In this paper we investigate the use of ensembles of GANs. The specific nature of GANs opens up several new ways to construct ensembles. The first is based on the fact that, in the minimax game played to optimize the GAN objective, the generator network keeps changing even after it can be considered optimal. Ensembles of GANs can therefore be constructed from the same network initialization, simply by taking models saved after different numbers of iterations. These so-called self-ensembles are much faster to train than traditional ensembles. The second method, called cascade GANs, redirects the part of the training data which is badly modeled by the first GAN to another GAN. In experiments on the CIFAR10 dataset we show that ensembles of GANs yield probability distributions which better model the data distribution. In addition, we show that these improved results can be obtained at little additional computational cost.
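As a rough illustration of the self-ensemble idea described in the abstract, the following Python sketch samples from several generator snapshots saved at different iterations of a single training run and treats them as a uniform mixture. It is a hypothetical sketch, not the authors' code: the Generator architecture, the checkpoint paths and the latent dimension are all assumptions.

    # Minimal sketch of a GAN "self-ensemble": sample from several generator
    # snapshots taken at different iterations of the same training run.
    # Generator architecture, checkpoint paths and z_dim are illustrative assumptions.
    import random
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Toy generator with CIFAR10-sized output; the paper's architecture may differ."""
        def __init__(self, z_dim=100):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(z_dim, 256), nn.ReLU(),
                nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
            )
        def forward(self, z):
            return self.net(z).view(-1, 3, 32, 32)

    def load_snapshots(paths, z_dim=100):
        """Load generator weights saved at different iterations of one training run."""
        snapshots = []
        for p in paths:
            g = Generator(z_dim)
            g.load_state_dict(torch.load(p, map_location="cpu"))
            g.eval()
            snapshots.append(g)
        return snapshots

    def sample_self_ensemble(snapshots, n_samples, z_dim=100):
        """Draw each sample from a randomly chosen snapshot (uniform mixture)."""
        images = []
        with torch.no_grad():
            for _ in range(n_samples):
                g = random.choice(snapshots)
                z = torch.randn(1, z_dim)
                images.append(g(z))
        return torch.cat(images, dim=0)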
|
|
|
Guim Perarnau, Joost Van de Weijer, Bogdan Raducanu, & Jose Manuel Alvarez. (2016). Invertible conditional GANs for image editing. In 30th Annual Conference on Neural Information Processing Systems Workshops.
Abstract: Generative Adversarial Networks (GANs) have recently been shown to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information makes it possible to determine specific representations of the generated images. In this work, we evaluate encoders to invert the mapping of a cGAN, i.e., to map a real image into a latent space and a conditional representation. This allows, for example, reconstructing and modifying real images of faces conditioned on arbitrary attributes.
Additionally, we evaluate the design of cGANs. The combination of an encoder
with a cGAN, which we call Invertible cGAN (IcGAN), enables re-generating real
images with deterministic complex modifications.
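A minimal sketch of the encoder-plus-cGAN workflow the abstract describes (encode a real image into a latent code and a conditional representation, edit the condition, and re-generate) could look as follows in Python. The network shapes, the attribute count and the editing helper are illustrative assumptions, not the IcGAN implementation.

    # Hedged sketch of the encoder + cGAN workflow: encode x into (z, y),
    # edit the attribute vector y, and re-generate. Shapes are assumptions.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Maps an image to a latent code z and an attribute vector y."""
        def __init__(self, z_dim=100, n_attr=18):
            super().__init__()
            self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512), nn.ReLU())
            self.to_z = nn.Linear(512, z_dim)
            self.to_y = nn.Linear(512, n_attr)
        def forward(self, x):
            h = self.backbone(x)
            return self.to_z(h), torch.sigmoid(self.to_y(h))

    class ConditionalGenerator(nn.Module):
        """cGAN generator conditioned on the attribute vector y."""
        def __init__(self, z_dim=100, n_attr=18):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(z_dim + n_attr, 512), nn.ReLU(),
                                     nn.Linear(512, 3 * 64 * 64), nn.Tanh())
        def forward(self, z, y):
            return self.net(torch.cat([z, y], dim=1)).view(-1, 3, 64, 64)

    def edit_image(encoder, generator, x, attr_index, new_value):
        """Reconstruct x with one attribute (e.g. a hypothetical 'blond hair' entry) changed."""
        with torch.no_grad():
            z, y = encoder(x)
            y_edit = y.clone()
            y_edit[:, attr_index] = new_value   # deterministic modification of the condition
            return generator(z, y_edit)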
|
|
|
Xavier Baro, Sergio Escalera, Isabelle Guyon, Julio C. S. Jacques Junior, Lukasz Romaszko, Lisheng Sun, et al. (2016). Coopetitions in machine learning: case studies. In 30th Annual Conference on Neural Information Processing Systems Workshops.
|
|
|
Ivet Rafegas, & Maria Vanrell. (2016). Colour Visual Coding in trained Deep Neural Networks. In European Conference on Visual Perception.
|
|
|
Arash Akbarinia, & C. Alejandro Parraga. (2016). Dynamically Adjusted Surround Contrast Enhances Boundary Detection. In European Conference on Visual Perception.
|
|
|
Carles Sanchez, Debora Gil, R. Tazi, Jorge Bernal, Y. Ruiz, L. Planas, et al. (2015). Quasi-real time digital assessment of Central Airway Obstruction. In 3rd European Congress for Bronchology and Interventional Pulmonology (ECBIP 2015).
|
|
|
Fernando Vilariño, Dan Norton, & Onur Ferhat. (2016). The Eye Doesn't Click – Eyetracking and Digital Content Interaction. In 4S/EASST Conference.
|
|
|
Marçal Rusiñol, & Josep Llados. (2009). Logo Spotting by a Bag-of-words Approach for Document Categorization. In 10th International Conference on Document Analysis and Recognition (pp. 111–115).
Abstract: In this paper we present a method for document categorization which processes incoming document images such as invoices or receipts. These document images are categorized according to the presence of a particular graphical logo, detected without segmentation. The graphical logos are described by a set of local features and the categorization of the documents is performed using a bag-of-words model. Spatial coherence rules are added to reinforce the correct category hypothesis, also aiming to spot the logo inside the document image. Experiments demonstrating the effectiveness of this system on a large set of real data are presented.
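For readers unfamiliar with the bag-of-visual-words model mentioned in the abstract, a generic pipeline (local descriptors, k-means vocabulary, per-document histogram, linear classifier) can be sketched in Python as below. The library choices (OpenCV SIFT, scikit-learn) and parameters are assumptions, and the paper's spatial coherence rules and logo spotting step are not reproduced.

    # Generic bag-of-visual-words pipeline: descriptors -> vocabulary -> histogram -> SVM.
    # Illustrative only; not the authors' implementation.
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    def local_descriptors(image_paths):
        """Extract SIFT descriptors from each document image."""
        sift = cv2.SIFT_create()
        per_image = []
        for path in image_paths:
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, desc = sift.detectAndCompute(img, None)
            per_image.append(desc if desc is not None else np.zeros((0, 128), np.float32))
        return per_image

    def build_vocabulary(per_image, n_words=200):
        """Cluster all descriptors into a visual vocabulary."""
        all_desc = np.vstack([d for d in per_image if len(d)])
        return KMeans(n_clusters=n_words, n_init=10).fit(all_desc)

    def bow_histogram(desc, vocabulary, n_words=200):
        """Normalised histogram of visual-word occurrences for one document."""
        hist = np.zeros(n_words, dtype=np.float32)
        if len(desc):
            for w in vocabulary.predict(desc):
                hist[w] += 1
            hist /= hist.sum()
        return hist

    # Usage sketch: train a linear SVM on histograms of labelled invoices/receipts.
    # descs = local_descriptors(train_paths)
    # vocab = build_vocabulary(descs)
    # X = np.stack([bow_histogram(d, vocab) for d in descs])
    # clf = LinearSVC().fit(X, train_labels)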
|
|
|
Debora Gil, Agnes Borras, Manuel Ballester, Francesc Carreras, Ruth Aris, Manuel Vazquez, et al. (2011). MIOCARDIA: Integrating cardiac function and muscular architecture for a better diagnosis. In Association for Computing Machinery (Ed.), 14th International Symposium on Applied Sciences in Biomedical and Communication Technologies. Barcelona, Spain.
Abstract: A deep understanding of the myocardial structure of the heart would unravel crucial knowledge for clinical and medical procedures. The MIOCARDIA project is a multidisciplinary project in cooperation with l'Hospital de la Santa Creu i de Sant Pau, Clinica la Creu Blanca and the Barcelona Supercomputing Center. The ultimate goal of this project is to define a computational model of the myocardium. The model takes into account the deep interrelation between the anatomy and the mechanics of the heart. The paper explains the workflow of the MIOCARDIA project. It also introduces a multiresolution reconstruction technique based on DT-MRI streamlining for simplified global myocardial model generation. Our reconstructions can restore the most complex myocardial structures and provide evidence of a global helical organization.
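The streamlining step mentioned in the abstract can be illustrated with a minimal sketch that integrates fibre paths along the principal eigenvector of a DT-MRI tensor field. The volume layout, step size and stopping criterion are assumptions; the paper's multiresolution reconstruction is not reproduced here.

    # Minimal DT-MRI streamline sketch: follow the principal eigenvector field.
    # Volume shape, step size and number of steps are illustrative assumptions.
    import numpy as np

    def principal_directions(tensors):
        """tensors: (X, Y, Z, 3, 3) symmetric diffusion tensors -> unit principal eigenvectors."""
        vals, vecs = np.linalg.eigh(tensors)   # eigenvalues in ascending order
        return vecs[..., :, -1]                # eigenvector of the largest eigenvalue

    def track_streamline(directions, seed, step=0.5, n_steps=200):
        """Euler integration of a fibre path from a seed point (voxel coordinates)."""
        path = [np.asarray(seed, dtype=float)]
        for _ in range(n_steps):
            p = path[-1]
            idx = tuple(np.clip(np.round(p).astype(int), 0,
                                np.array(directions.shape[:3]) - 1))
            d = directions[idx]
            if len(path) > 1:                  # keep a consistent tracking orientation
                prev = path[-1] - path[-2]
                if np.dot(d, prev) < 0:
                    d = -d
            path.append(p + step * d)
        return np.array(path)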
|
|
|
D. Perez, L. Tarazon, N. Serrano, F.M. Castro, Oriol Ramos Terrades, & A. Juan. (2009). The GERMANA Database. In 10th International Conference on Document Analysis and Recognition (pp. 301–305).
Abstract: A new handwritten text database, GERMANA, is presented to facilitate empirical comparison of different approaches to text line extraction and off-line handwriting recognition. GERMANA is the result of digitising and annotating a 764-page Spanish manuscript from 1891, in which most pages only contain nearly calligraphed text written on ruled sheets of well-separated lines. To our knowledge, it is the first publicly available database for handwriting research, mostly written in Spanish and comparable in size to standard databases. Due to its sequential book structure, it is also well-suited for realistic assessment of interactive handwriting recognition systems. To provide baseline results for reference in future studies, empirical results are also reported, using standard techniques and tools for preprocessing, feature extraction, HMM-based image modelling, and language modelling.
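As an illustration of the kind of sliding-window feature extraction that commonly feeds HMM-based handwriting recognition, as referenced in the abstract, consider the hedged Python sketch below. The specific features and window width are assumptions, not the baseline system reported for GERMANA.

    # Hedged sketch of sliding-window features over a text-line image; the
    # resulting sequence is what an HMM would typically model.
    import numpy as np

    def column_features(line_image, window=3):
        """line_image: 2-D array with ink values in [0, 1], background = 0.
        Returns one feature vector (grey level, centre of gravity, vertical
        spread of the ink) per horizontal position."""
        h, w = line_image.shape
        rows = np.arange(h, dtype=float)
        feats = []
        for x in range(0, w - window + 1):
            col = line_image[:, x:x + window].mean(axis=1)
            mass = col.sum()
            if mass > 0:
                cog = (rows * col).sum() / mass                    # centre of gravity
                spread = np.sqrt(((rows - cog) ** 2 * col).sum() / mass)
            else:
                cog, spread = h / 2.0, 0.0
            feats.append([col.mean(), cog / h, spread / h])        # normalised features
        return np.array(feats)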
|
|
|
Victor Campmany, Sergio Silva, Juan Carlos Moure, Antoni Espinosa, David Vazquez, & Antonio Lopez. (2015). GPU-based pedestrian detection for autonomous driving. In Programming and Tuning Massively Parallel Systems (PUMPS).
Abstract: Pedestrian detection for autonomous driving has gained a lot of prominence during the last few years. Besides being one of the hardest tasks within computer vision, it involves huge computational costs. The real-time constraints in the field are tight, and regular processors are not able to handle the workload while achieving an acceptable rate of frames per second (fps). Moreover, multiple cameras are required to obtain accurate results, so the need to speed up the process is even higher. Taking the work in [1] as our baseline, we propose a CUDA implementation of a pedestrian detection system. Further, we introduce significant algorithmic adjustments and optimizations to adapt the problem to the GPU architecture. The aim is to provide a system capable of running in real time while obtaining reliable results.
Keywords: Autonomous Driving; ADAS; CUDA; Pedestrian Detection
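As a point of reference for the pipeline being ported to CUDA, the classical HOG plus linear SVM pedestrian detector can be run on the CPU with a few lines of OpenCV. The sketch below is only a stand-in baseline and does not reproduce the paper's detector or its GPU kernels.

    # CPU baseline sketch: OpenCV's HOG + linear SVM people detector.
    # A stand-in for discussion only, not the system described in the abstract.
    import cv2

    def detect_pedestrians(frame_bgr):
        """Run a multi-scale sliding-window HOG detector on one frame."""
        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
        boxes, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8),
                                              padding=(8, 8), scale=1.05)
        return boxes, weights   # (x, y, w, h) rectangles and their confidence scores

    # Usage: boxes, weights = detect_pedestrians(cv2.imread("frame.png"))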
|
|
|
Sergio Silva, Victor Campmany, Laura Sellart, Juan Carlos Moure, Antoni Espinosa, David Vazquez, et al. (2015). Autonomous GPU-based Driving. In Programming and Tuning Massively Parallel Systems (PUMPS).
Abstract: Human factors cause most driving accidents; this is why it is now common to hear about autonomous driving as an alternative. Autonomous driving will not only increase safety, but will also enable a system of cooperative self-driving cars that reduces pollution and congestion. Furthermore, it will provide more freedom to people with disabilities, the elderly and children.
Autonomous driving requires perceiving and understanding the vehicle environment (e.g., road, traffic signs, pedestrians, vehicles) using sensors (e.g., cameras, lidars, sonars, and radars), self-localization (requiring GPS, inertial sensors and visual localization in precise maps), controlling the vehicle and planning the routes. These algorithms require high computational capability, and thanks to NVIDIA GPU acceleration this is starting to become feasible.
NVIDIA® is developing a new platform for boosting autonomous driving capabilities that is able to manage the vehicle via CAN bus: the Drive™ PX. It has 8 ARM cores with dual accelerated Tegra® X1 chips. It provides 12 synchronized camera inputs for 360º vehicle perception, 4G and Wi-Fi capabilities allowing vehicle communications, and GPS and inertial sensor inputs for self-localization.
Our research group has been selected to test the Drive™ PX. Accordingly, we are developing a Drive™ PX based autonomous car. Currently, we are porting our previous CPU-based algorithms (e.g., Lane Departure Warning, Collision Warning, Automatic Cruise Control, Pedestrian Protection, or Semantic Segmentation) to run on the GPU.
Keywords: Autonomous Driving; ADAS; CUDA
|
|
|
Debora Gil, & Antoni Rosell. (2019). Advances in Artificial Intelligence – How Lung Cancer CT Screening Will Progress? In World Lung Cancer Conference.
Abstract: Invited speaker
|
|
|
Debora Gil, Oriol Ramos Terrades, & Raquel Perez. (2020). Topological Radiomics (TOPiomics): Early Detection of Genetic Abnormalities in Cancer Treatment Evolution. In Women in Geometry and Topology.
|
|
|
Alvaro Peris, Marc Bolaños, Petia Radeva, & Francisco Casacuberta. (2016). Video Description Using Bidirectional Recurrent Neural Networks. In 25th International Conference on Artificial Neural Networks (Vol. 2, pp. 3–11).
Abstract: Although traditionally used in the machine translation field, the encoder-decoder framework has recently been applied to the generation of video and image descriptions. The combination of Convolutional and Recurrent Neural Networks in these models has proven to outperform the previous state of the art, obtaining more accurate video descriptions. In this work we propose pushing this model further by introducing two contributions into the encoding stage: first, producing richer image representations by combining object and location information from Convolutional Neural Networks; and second, introducing Bidirectional Recurrent Neural Networks to capture both forward and backward temporal relationships in the input frames.
Keywords: Video description; Neural Machine Translation; Bidirectional Recurrent Neural Networks; LSTM; Convolutional Neural Networks
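A minimal PyTorch sketch of the encoder-decoder structure described in the abstract is given below: precomputed CNN frame features are encoded with a bidirectional LSTM and a second LSTM greedily decodes a word sequence. The dimensions, the mean-pooled bridge and the decoding loop are illustrative assumptions, not the authors' configuration.

    # Hedged encoder-decoder sketch: BLSTM over CNN frame features, LSTM decoder.
    import torch
    import torch.nn as nn

    class VideoCaptioner(nn.Module):
        def __init__(self, feat_dim=2048, hidden=512, vocab_size=10000, bos_index=1):
            super().__init__()
            self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
            self.decoder = nn.LSTMCell(hidden, hidden)
            self.bridge = nn.Linear(2 * hidden, hidden)   # merge forward/backward states
            self.embed = nn.Embedding(vocab_size, hidden)
            self.out = nn.Linear(hidden, vocab_size)
            self.bos_index = bos_index

        def forward(self, frame_feats, max_len=20):
            """frame_feats: (batch, n_frames, feat_dim) CNN features. Greedy decoding."""
            enc_out, _ = self.encoder(frame_feats)            # (batch, n_frames, 2*hidden)
            context = torch.tanh(self.bridge(enc_out.mean(dim=1)))
            h, c = context, torch.zeros_like(context)
            token = torch.full((frame_feats.size(0),), self.bos_index,
                               dtype=torch.long, device=frame_feats.device)
            words = []
            for _ in range(max_len):
                h, c = self.decoder(self.embed(token), (h, c))
                token = self.out(h).argmax(dim=1)
                words.append(token)
            return torch.stack(words, dim=1)                  # (batch, max_len) word ids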
|
|