Yaxing Wang, L. Zhang, & Joost Van de Weijer. (2016). Ensembles of generative adversarial networks. In 30th Annual Conference on Neural Information Processing Systems Workshops.
Abstract: Ensembles are a popular way to improve the results of discriminative CNNs. Combining several networks trained from different initializations improves results significantly. In this paper we investigate the use of ensembles of GANs. The specific nature of GANs opens up several new ways to construct ensembles. The first is based on the fact that, in the minimax game played to optimize the GAN objective, the generator network keeps changing even after it can be considered optimal. Ensembles of GANs can therefore be constructed from the same network initialization, simply taking models after different numbers of iterations. These so-called self-ensembles are much faster to train than traditional ensembles. The second method, called cascade GANs, redirects the part of the training data that is badly modeled by the first GAN to another GAN. In experiments on the CIFAR10 dataset we show that ensembles of GANs obtain model probability distributions that better model the data distribution. In addition, we show that these improved results can be obtained at little additional computational cost.
|
Daniel Hernandez, Alejandro Chacon, Antonio Espinosa, David Vazquez, Juan Carlos Moure, & Antonio Lopez. (2016). Embedded real-time stereo estimation via Semi-Global Matching on the GPU. In 16th International Conference on Computational Science (Vol. 80, pp. 143–153).
Abstract: Dense, robust and real-time computation of depth information from stereo-camera systems is a computationally demanding requirement for robotics, advanced driver assistance systems (ADAS) and autonomous vehicles. Semi-Global Matching (SGM) is a widely used algorithm that propagates consistency constraints along several paths across the image. This work presents a real-time system producing reliable disparity estimation results on the new embedded energy-efficient GPU devices. Our design runs on a Tegra X1 at 41 frames per second for an image size of 640x480, 128 disparity levels, and using 4 path directions for the SGM method.
Keywords: Autonomous Driving; Stereo; CUDA; 3D reconstruction
|
Juan Ignacio Toledo, Alicia Fornes, Jordi Cucurull, & Josep Llados. (2016). Election Tally Sheets Processing System. In 12th IAPR Workshop on Document Analysis Systems (pp. 364–368).
Abstract: In paper-based elections, manual tallies at the polling-station level produce myriads of documents. These documents share a common form-like structure and a reduced vocabulary worldwide. On the other hand, each tally sheet is filled in by a different writer, and different scripts are used in different countries. We present a complete document analysis system for electoral tally sheet processing, combining state-of-the-art techniques with a new handwriting recognition subprocess based on unsupervised feature discovery with Variational Autoencoders and sequence classification with BLSTM neural networks. The whole system is designed to be script-independent and allows a fast and reliable results consolidation process at reduced operational cost.
|
G. de Oliveira, Mariella Dimiccoli, & Petia Radeva. (2016). Egocentric Image Retrieval With Deep Convolutional Neural Networks. In 19th International Conference of the Catalan Association for Artificial Intelligence (pp. 71–76).
|
Arash Akbarinia, & C. Alejandro Parraga. (2016). Dynamically Adjusted Surround Contrast Enhances Boundary Detection. In European Conference on Visual Perception.
|
Y. Patel, Lluis Gomez, Marçal Rusiñol, & Dimosthenis Karatzas. (2016). Dynamic Lexicon Generation for Natural Scene Images. In 14th European Conference on Computer Vision Workshops (pp. 395–410).
Abstract: Many scene text understanding methods approach the end-to-end recognition problem from a word-spotting perspective and benefit greatly from using small per-image lexicons. Such customized lexicons are normally assumed as given, and their source is rarely discussed. In this paper we propose a method that generates contextualized lexicons for scene images using only visual information. For this, we exploit the correlation between visual and textual information in a dataset consisting of images and the textual content associated with them. Using the topic modeling framework to discover a set of latent topics in such a dataset allows us to re-rank a fixed dictionary in a way that prioritizes the words that are more likely to appear in a given image. Moreover, we train a CNN that is able to reproduce those word rankings using only the raw image pixels as input. We demonstrate that the quality of the automatically obtained custom lexicons is superior to a generic frequency-based baseline.
Keywords: scene text; photo OCR; scene understanding; lexicon generation; topic modeling; CNN
|
Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes García-Martínez, Fethi Bougares, et al. (2016). Does Multimodality Help Human and Machine for Translation and Image Captioning? In 1st Conference on Machine Translation.
Abstract: This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human and machine translation and for image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR.
|
Fernando Vilariño. (2016). Dissemination, creation and education from archives: Case study of the collection of Digitized Visual Poems from Joan Brossa Foundation. In International Workshop on Poetry: Archives, Poetries and Receptions.
|
Marc Sunset Perez, Marc Comino Trinidad, Dimosthenis Karatzas, Antonio Chica Calaf, & Pere Pau Vazquez Alcocer. (2016). Development of general-purpose projection-based augmented reality systems. IADIS International Journal on Computer Science and Information Systems, 1–18.
Abstract: Despite the large number of methods and applications of augmented reality, there is little homogenization of the software platforms that support them. An exception may be the low-level control software provided by some high-profile vendors such as Qualcomm and Metaio. However, these provide fine-grained modules for, e.g., element tracking. We are more concerned with the application framework, which includes the control of the devices working together for the development of the AR experience. In this paper we describe the development of a software framework for AR setups. We concentrate on the modular design of the framework, but also on some hard problems such as the calibration stage, which is crucial for projection-based AR. The developed framework is suitable for, and has been tested in, AR applications using camera-projector pairs, in both fixed and nomadic setups.
|
Q. Bao, Marçal Rusiñol, M. Coustaty, Muhammad Muzzamil Luqman, C. D. Tran, & Jean-Marc Ogier. (2016). Delaunay triangulation-based features for camera-based document image retrieval system. In 12th IAPR Workshop on Document Analysis Systems (pp. 1–6).
Abstract: In this paper, we propose a new feature vector, named DElaunay TRIangulation-based Features (DETRIF), for real-time camera-based document image retrieval. DETRIF is computed from the geometrical constraints of each pair of adjacent triangles in a Delaunay triangulation constructed from the centroids of connected components. In addition, we employ a hashing-based indexing system in order to evaluate the performance of DETRIF and to compare it with other systems such as LLAH and SRIF. The experimentation is carried out on two datasets comprising 400 heterogeneous-content complex linguistic map images (huge size, 9800 × 11768 pixel resolution) and 700 textual document images.
Keywords: Camera-based Document Image Retrieval; Delaunay Triangulation; Feature descriptors; Indexing
|
Xavier Baro, Sergio Escalera, Isabelle Guyon, Julio C. S. Jacques Junior, Lukasz Romaszko, Lisheng Sun, et al. (2016). Competitions in machine learning: case studies. In 30th Annual Conference on Neural Information Processing Systems Workshops.
|
Pejman Rasti, Tonis Uiboupin, Sergio Escalera, & Gholamreza Anbarjafari. (2016). Convolutional Neural Network Super Resolution for Face Recognition in Surveillance Monitoring. In 9th Conference on Articulated Motion and Deformable Objects.
|
Marc Oliu, Ciprian Corneanu, Laszlo A. Jeni, Jeffrey F. Cohn, Takeo Kanade, & Sergio Escalera. (2016). Continuous Supervised Descent Method for Facial Landmark Localisation. In 13th Asian Conference on Computer Vision (Vol. 10112, pp. 121–135). LNCS.
Abstract: Recent methods for facial landmark location perform well on close-to-frontal faces but have problems in generalising to large head rotations. In order to address this issue we propose a second order linear regression method that is both compact and robust against strong rotations. We provide a closed form solution, making the method fast to train. We test the method’s performance on two challenging datasets. The first has been intensely used by the community. The second has been specially generated from a well known 3D face dataset. It is considerably more challenging, including a high diversity of rotations and more samples than any other existing public dataset. The proposed method is compared against state-of-the-art approaches, including RCPR, CGPRT, LBF, CFSS, and GSDM. Results upon both datasets show that the proposed method offers state-of-the-art performance on near frontal view data, improves state-of-the-art methods on more challenging head rotation problems and keeps a compact model size.
|
Carlos David Martinez Hinarejos, Josep Llados, Alicia Fornes, Francisco Casacuberta, Lluis de Las Heras, Joan Mas, et al. (2016). Context, multimodality, and user collaboration in handwritten text processing: the CoMUN-HaT project. In 3rd IberSPEECH.
Abstract: Processing of handwritten documents is a task of wide interest for many purposes, such as those related to preserving cultural heritage. Handwritten text recognition techniques have been successfully applied during the last decade to obtain transcriptions of handwritten documents, and keyword spotting techniques have been applied to searching for specific terms in image collections of handwritten documents. However, results on transcription and indexing are far from perfect. In this framework, the use of new data sources arises as a new paradigm that will allow for better transcription and indexing of handwritten documents. Three main data sources could be considered: the context of the document (style, writer, historical period, topics, ...), multimodal data (representations of the document in a different modality, such as the speech signal of the dictation of the text), and user feedback (corrections, amendments, ...). The CoMUN-HaT project aims at integrating these data sources into the transcription and indexing of handwritten documents: the use of context derived from the analysis of the documents, how multimodality can aid the recognition process to obtain more accurate transcriptions (including transcription into a modern version of the language), and integration into a user-in-the-loop assisted text transcription framework. This will be reflected in the construction of a transcription and indexing platform that can be used by both professional and non-professional users, contributing to crowd-sourcing activities to preserve cultural heritage and to obtain an accessible version of the involved corpus.
|
Simone Balocco, Maria Zuluaga, Guillaume Zahnd, Su-Lin Lee, & Stefanie Demirci. (2016). Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting. Elsevier.
|