Juan Ignacio Toledo, Alicia Fornes, Jordi Cucurull, & Josep Llados. (2016). Election Tally Sheets Processing System. In 12th IAPR Workshop on Document Analysis Systems (pp. 364–368).
Abstract: In paper-based elections, manual tallies at the polling-station level produce myriad documents. These documents share a common form-like structure and a reduced vocabulary worldwide; however, each tally sheet is filled in by a different writer, and different countries use different scripts. We present a complete document analysis system for electoral tally sheet processing that combines state-of-the-art techniques with a new handwriting recognition subprocess based on unsupervised feature discovery with Variational Autoencoders and sequence classification with BLSTM neural networks. The whole system is designed to be script independent and enables a fast and reliable results-consolidation process with reduced operational cost.
|
Albert Berenguel, Oriol Ramos Terrades, Josep Llados, & Cristina Cañero. (2016). Banknote counterfeit detection through background texture printing analysis. In 12th IAPR Workshop on Document Analysis Systems.
Abstract: This paper focuses on the detection of counterfeit photocopied banknotes. The main difficulty is working in a real industrial scenario, without any constraint on the acquisition device and with only a single image. The main contributions of this paper are twofold: first, the adaptation and performance evaluation of existing approaches for classifying genuine and photocopied banknotes using background texture printing analysis, which had not been applied in this context before; second, a new dataset of Euro banknote images acquired with several cameras under different luminance conditions to evaluate these methods. Experiments on the proposed algorithms show that combining SIFT features with sparse-coding dictionaries achieves quasi-perfect classification using a linear SVM on the created dataset. Approaches using dictionaries to cover all possible texture variations have proven robust and outperform state-of-the-art methods on the proposed benchmark.
|
Sergio Escalera, Mercedes Torres-Torres, Brais Martinez, Xavier Baro, Hugo Jair Escalante, Isabelle Guyon, et al. (2016). ChaLearn Looking at People and Faces of the World: Face Analysis Workshop and Challenge 2016. In 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops.
Abstract: We present the 2016 ChaLearn Looking at People and Faces of the World Challenge and Workshop, which ran three competitions on the common theme of face analysis from still images. The first one, Looking at People, addressed age estimation, while the second and third competitions, Faces of the World, addressed accessory classification and smile and gender classification, respectively. We present two crowd-sourcing methodologies used to collect manual annotations. A custom-built application was used to collect and label data about the apparent age of people (as opposed to their real age). For the Faces of the World data, the citizen-science Zooniverse platform was used. This paper summarizes the three challenges and the data used, as well as the results achieved by the participants of the competitions. Details of the ChaLearn LAP FotW competitions can be found at http://gesture.chalearn.org.
|
Cristhian A. Aguilera-Carrasco, F. Aguilera, Angel Sappa, C. Aguilera, & Ricardo Toledo. (2016). Learning cross-spectral similarity measures with deep convolutional neural networks. In 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops.
Abstract: The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene than can be obtained from images of a single spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result on two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains.
|
Jun Wan, Yibing Zhao, Shuai Zhou, Isabelle Guyon, & Sergio Escalera. (2016). ChaLearn Looking at People RGB-D Isolated and Continuous Datasets for Gesture Recognition. In 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops.
Abstract: In this paper, we present two large multi-modal video datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD), which has a total of more than 50000 gestures for the “one-shot-learning” competition. To increase the potential of the old dataset, we designed new, well-curated datasets composed of 249 gesture labels and including 47933 gestures with manually labeled beginning and end frames in the sequences. Using these datasets, we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for “user-independent” gesture recognition. The first challenge is designed for gesture spotting and recognition in continuous sequences of gestures, while the second one is designed for gesture classification from segmented data. A baseline method based on the bag-of-visual-words model is also presented.
|
German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, & Antonio Lopez. (2016). The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes. In 29th IEEE Conference on Computer Vision and Pattern Recognition (pp. 3234–3243).
Abstract: Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. The emergence of deep convolutional neural networks (DCNNs) makes it possible to foresee reliable classifiers for this visual task. However, DCNNs must learn many parameters from raw images; thus, a sufficient amount of diversified images with class annotations is needed. These annotations are obtained through cumbersome human labour, which is especially challenging for semantic segmentation, since pixel-level annotations are required. In this paper, we propose to use a virtual world to automatically generate realistic synthetic images with pixel-level annotations. We then address the question of how useful such data can be for the task of semantic segmentation, in particular when using a DCNN paradigm. To answer this question, we have generated a diversified synthetic collection of urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. We then conduct experiments on a DCNN setting which show that including SYNTHIA in the training stage significantly improves the performance of the semantic segmentation task.
Keywords: Domain Adaptation; Autonomous Driving; Virtual Data; Semantic Segmentation
|
Jean-Pascal Jacob, Mariella Dimiccoli, & Lionel Moisan. (2016). Active skeleton for bacteria modeling. CMBBE - Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization, 5(4), 274–286.
Abstract: The investigation of spatio-temporal dynamics of bacterial cells and their molecular components requires automated image analysis tools to track cell shape properties and molecular component locations inside the cells. In the study of bacteria aging, the molecular components of interest are protein aggregates accumulated near bacteria boundaries. This particular location makes the correspondence between aggregates and cells very ambiguous, since accurately computing bacteria boundaries in phase-contrast time-lapse imaging is a challenging task. This paper proposes an active-skeleton formulation for bacteria modeling which provides several advantages: easy computation of shape properties (perimeter, length, thickness, orientation), improved boundary accuracy in noisy images, and a natural bacteria-centered coordinate system that permits the intrinsic location of molecular components inside the cell. Starting from an initial skeleton estimate, the medial axis of the bacterium is obtained by minimizing an energy function that incorporates bacteria shape constraints. Experimental results on biological images and a comparative performance evaluation validate the proposed approach for modeling cigar-shaped bacteria such as Escherichia coli. The ImageJ plugin of the proposed method can be found online at this http URL
Keywords: Bacteria modelling; medial axis; active contours; active skeleton; shape constraints
|
C. Butakoff, Simone Balocco, F.M. Sukno, C. Hoogendoorn, C. Tobon-Gomez, G. Avegliano, et al. (2016). Left-ventricular Epi- and Endocardium Extraction from 3D Ultrasound Images Using an Automatically Constructed 3D ASM. CMBBE - Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization, 4(5), 265–280.
Abstract: In this paper, we propose an automatic method for constructing an active shape model (ASM) to segment the complete cardiac left ventricle in 3D ultrasound (3DUS) images, which avoids costly manual landmarking. The automatic construction of the ASM has already been addressed in the literature; however, the direct application of these methods to 3DUS is hampered by a high level of noise and artefacts. Therefore, we propose to construct the ASM by fusing multidetector computed tomography data, to learn the shape, with artificially generated 3DUS, in order to learn the neighbourhood of the boundaries. Our artificial images were generated by two approaches: a faster one that does not take into account the geometry of the transducer, and a more comprehensive one implemented in the Field II toolbox. The segmentation accuracy of our ASM was evaluated on 20 patients with left-ventricular asynchrony, demonstrating the plausibility of the approach.
Keywords: ASM; cardiac segmentation; statistical model; shape model; 3D ultrasound
|
Gloria Fernandez Esparrach, Jorge Bernal, Maria Lopez Ceron, Henry Cordova, Cristina Sanchez Montes, Cristina Rodriguez de Miguel, et al. (2016). Exploring the clinical potential of an automatic colonic polyp detection method based on the creation of energy maps. END - Endoscopy, 48(9), 837–842.
Abstract: Background and aims: The polyp miss rate is a drawback of colonoscopy that increases significantly for small polyps. We explored the efficacy of an automatic computer-vision method for polyp detection.
Methods: Our method relies on a model that defines polyp boundaries as valleys of image intensity. Valley information is integrated into energy maps that represent the likelihood of polyp presence.
Results: In 24 videos containing polyps from routine colonoscopies, all polyps were detected in at least one frame. Mean values of the energy-map maximum were higher in frames with polyps than in frames without (p < 0.001). Performance improved in high-quality frames (AUC = 0.79, 95% CI: 0.70–0.87 vs 0.75, 95% CI: 0.66–0.83). Using 3.75 as the threshold on the energy-map maximum, sensitivity and specificity for polyp detection were 70.4% (95% CI: 60.3–80.8) and 72.4% (95% CI: 61.6–84.6), respectively.
Conclusion: Energy maps showed good performance for colonic polyp detection, indicating potential applicability in clinical practice.
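The decision rule reported above (flag a frame when the energy-map maximum exceeds a threshold) can be sketched in a few lines. This is a toy illustration, not the paper's method: the positive-Laplacian valley response and the names `energy_map`/`detect_polyp` are assumptions standing in for the actual valley-integration model.

```python
import numpy as np

def energy_map(gray):
    """Toy valley-energy map: positive responses of a discrete Laplacian,
    which peak at intensity valleys (a stand-in for the paper's
    valley-integration step). Expects a 2D float array."""
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    return np.clip(lap, 0.0, None)  # keep only valley (positive) responses

def detect_polyp(gray, threshold=3.75):
    """Flag a frame as containing a polyp when the energy-map maximum
    exceeds the threshold (3.75 in the study, on its own maps)."""
    return float(energy_map(gray).max()) > threshold
```

A flat frame yields zero energy everywhere, while a localized intensity valley produces a strong maximum, which is the behaviour the frame-level classifier relies on.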
|
Mariella Dimiccoli. (2016). Figure-ground segregation: A fully nonlocal approach. VR - Vision Research, 126, 308–317.
Abstract: We present a computational model that computes and integrates in a nonlocal fashion several configural cues for automatic figure-ground segregation. Our working hypothesis is that the figural status of each pixel is a nonlocal function of several geometric shape properties and it can be estimated without explicitly relying on object boundaries. The methodology is grounded on two elements: multi-directional linear voting and nonlinear diffusion. A first estimation of the figural status of each pixel is obtained as a result of a voting process, in which several differently oriented line-shaped neighborhoods vote to express their belief about the figural status of the pixel. A nonlinear diffusion process is then applied to enforce the coherence of figural status estimates among perceptually homogeneous regions. Computer simulations fit human perception and match the experimental evidence that several cues cooperate in defining figure-ground segregation. The results of this work suggest that figure-ground segregation involves feedback from cells with larger receptive fields in higher visual cortical areas.
Keywords: Figure-ground segregation; Nonlocal approach; Directional linear voting; Nonlinear diffusion
|
Angel Sappa, Cristhian A. Aguilera-Carrasco, Juan A. Carvajal Ayala, Miguel Oliveira, Dennis Romero, Boris X. Vintimilla, et al. (2016). Monocular visual odometry: A cross-spectral image fusion based approach. RAS - Robotics and Autonomous Systems, 85, 26–36.
Abstract: This manuscript evaluates the use of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is found empirically by means of a mutual-information-based evaluation metric. The objective is to have a flexible scheme whose fusion parameters adapt to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using datasets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with the monocular visible/infrared spectra are provided, showing the advantages of the proposed scheme.
Keywords: Monocular visual odometry; LWIR-RGB cross-spectral imaging; Image fusion
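A minimal sketch of wavelet-domain image fusion in the spirit of this abstract, assuming a single-level Haar transform and one common fusion rule (average the approximation band, keep the larger-magnitude detail coefficient). The paper's actual DWT setup and its mutual-information parameter tuning are not reproduced; all function names here are illustrative.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform of an even-sized image.
    Returns the (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def fuse(visible, infrared):
    """Fuse two registered single-channel images in the wavelet domain:
    average the approximation band, keep the stronger detail coefficient."""
    bands_v, bands_i = haar_dwt2(visible), haar_dwt2(infrared)
    LL = (bands_v[0] + bands_i[0]) / 2.0
    details = [np.where(np.abs(dv) >= np.abs(di), dv, di)
               for dv, di in zip(bands_v[1:], bands_i[1:])]
    return haar_idwt2(LL, *details)
```

Because the forward/inverse pair is exact, fusing an image with itself returns the image unchanged, which is a handy sanity check for any fusion rule of this kind.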
|
Miguel Oliveira, Victor Santos, Angel Sappa, P. Dias, & A. Moreira. (2016). Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives. RAS - Robotics and Autonomous Systems, 83, 312–325.
Abstract: When an autonomous vehicle travels through a scenario, it receives a continuous stream of sensor data. This sensor data arrives asynchronously and often contains overlapping or redundant information. Thus, it is not trivial to create and update over time a representation of the environment observed by the vehicle. This paper presents a novel methodology for computing an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro-scale polygonal primitives to model the scenario; that is, the representation of the scene is given as a list of large-scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data are collected. Results show that the approach is capable of producing accurate descriptions of the scene and that it is computationally very efficient compared to other reconstruction techniques.
Keywords: Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives
|
Juan Ramon Terven Salinas, Bogdan Raducanu, Maria Elena Meza-de-Luna, & Joaquin Salas. (2016). Head-gestures mirroring detection in dyadic social interactions with computer vision-based wearable devices. NEUCOM - Neurocomputing, 175(B), 866–876.
Abstract: During face-to-face human interaction, nonverbal communication plays a fundamental role. A relevant aspect of social interactions is mirroring, in which a person tends to mimic the non-verbal behavior (head and body gestures, vocal prosody, etc.) of the counterpart. In this paper, we introduce a computer vision-based system to detect mirroring in dyadic social interactions using a wearable platform. In our context, mirroring is inferred as simultaneous head nodding displayed by the interlocutors. Our approach consists of the following steps: (1) facial features extraction; (2) facial features stabilization; (3) head nodding recognition; and (4) mirroring detection. Our system achieves a mirroring detection accuracy of 72% on a custom mirroring dataset.
Keywords: Head gestures recognition; Mirroring detection; Dyadic social interaction analysis; Wearable devices
|
Pejman Rasti, Salma Samiei, Mary Agoyi, Sergio Escalera, & Gholamreza Anbarjafari. (2016). Robust non-blind color video watermarking using QR decomposition and entropy analysis. JVCIR - Journal of Visual Communication and Image Representation, 38, 838–847.
Abstract: Issues such as content identification, document and image security, audience measurement, ownership, and copyright, among others, can be settled by the use of digital watermarking. Many recent video watermarking methods show drops in the visual quality of the sequences. The present work addresses this issue by introducing a robust and imperceptible non-blind color video frame watermarking algorithm. The method divides frames into moving and non-moving parts. The non-moving part of each color channel is processed separately using a block-based watermarking scheme. Blocks with an entropy lower than the average entropy of all blocks undergo a further process for embedding the watermark image. Finally, a watermarked frame is generated by adding the moving parts back. Several signal-processing attacks are applied to each watermarked frame in the experiments, and the results are compared with some recent algorithms. Experimental results show that the proposed scheme is imperceptible and robust against common signal-processing attacks.
Keywords: Video watermarking; QR decomposition; Discrete Wavelet Transformation; Chirp Z-transform; Singular value decomposition; Orthogonal–triangular decomposition
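The block-selection step described above (embed only in blocks whose entropy falls below the average block entropy) can be sketched as follows. This is an illustrative sketch only; `block_entropy` and `low_entropy_blocks` are invented names, and the 8x8 block size and 8-bit histogram are assumptions, not details taken from the paper.

```python
import numpy as np

def block_entropy(block, bins=256):
    """Shannon entropy (in bits) of a block's 8-bit intensity histogram."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def low_entropy_blocks(channel, size=8):
    """Return top-left coordinates of the blocks whose entropy is below
    the average block entropy: the candidates for watermark embedding."""
    h, w = channel.shape
    coords, ents = [], []
    for y in range(0, h - h % size, size):
        for x in range(0, w - w % size, size):
            coords.append((y, x))
            ents.append(block_entropy(channel[y:y + size, x:x + size]))
    mean_e = sum(ents) / len(ents)
    return [c for c, e in zip(coords, ents) if e < mean_e]
```

The intuition is that smooth (low-entropy) blocks tolerate the embedding perturbation with less visible distortion, which is what makes the scheme imperceptible.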
|
Gerard Canal, Sergio Escalera, & Cecilio Angulo. (2016). A Real-time Human-Robot Interaction system based on gestures for assistive scenarios. CVIU - Computer Vision and Image Understanding, 149, 65–77.
Abstract: Natural and intuitive human interaction with robotic systems is key to developing robots that assist people easily and effectively. In this paper, a Human-Robot Interaction (HRI) system able to recognize gestures commonly employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system handles dynamic gestures, such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture, consisting of pointing at an object, is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure is performed by means of either a verbal or gestural dialogue. This skill allows the robot to pick up an object on behalf of a user who may have difficulty doing so themselves. The overall system, composed of NAO and Wifibot robots, a Kinect v2 sensor, and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests was completed, allowing us to assess correct performance in terms of recognition rates, ease of use, and response times.
Keywords: Gesture recognition; Human Robot Interaction; Dynamic Time Warping; Pointing location estimation
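The Dynamic Time Warping core used for dynamic-gesture matching can be sketched generically. This is the textbook DTW recurrence with a Euclidean local cost over per-frame feature vectors, not the paper's implementation; the gesture-specific depth features it operates on are not reproduced here.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic Time Warping distance between two feature sequences
    (lists of per-frame feature vectors), with Euclidean local cost.
    Smaller distances mean the sequences are more similar up to
    non-linear time warping."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    # D[i][j] = cost of the best alignment of seq_a[:i] with seq_b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq_a[i - 1], seq_b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

A query gesture is then classified by its nearest template under this distance; because DTW warps time, a wave performed slowly still matches a faster template of the same gesture.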
|