|
Albert Berenguel, Oriol Ramos Terrades, Josep Llados, & Cristina Cañero. (2016). Banknote counterfeit detection through background texture printing analysis. In 12th IAPR Workshop on Document Analysis Systems.
Abstract: This paper focuses on the detection of counterfeit photocopied banknotes. The main difficulty is working in a real industrial scenario, with no constraints on the acquisition device and with a single image. The contributions of this paper are twofold: first, the adaptation and performance evaluation of existing approaches for classifying genuine and photocopied banknotes using background texture printing analysis, which had not been applied in this context before; second, a new dataset of Euro banknote images acquired with several cameras under different luminance conditions to evaluate these methods. Experiments with the proposed algorithms show that combining SIFT features with sparse coding dictionaries achieves near-perfect classification using a linear SVM on the created dataset. Approaches using dictionaries to cover all possible texture variations have proven robust and outperform the state-of-the-art methods on the proposed benchmark.
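The pipeline the abstract describes lends itself to a short sketch. Below is a minimal, hedged outline of classifying banknotes from SIFT descriptors encoded over a learned sparse-coding dictionary and fed to a linear SVM; the dictionary size, sparsity level, and max pooling are illustrative assumptions, not the paper's exact configuration.

```python
import cv2
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

sift = cv2.SIFT_create()

def sift_descriptors(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)
    return desc  # (n_keypoints, 128) float32

def train(image_paths, labels, n_atoms=256):
    # Learn a dictionary of texture atoms over all training descriptors.
    all_desc = np.vstack([sift_descriptors(p) for p in image_paths])
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=5).fit(all_desc)
    # Max-pool each image's sparse codes into one fixed-length feature vector.
    feats = [np.abs(dico.transform(sift_descriptors(p))).max(axis=0)
             for p in image_paths]
    return dico, LinearSVC().fit(feats, labels)
```

At test time an image would be encoded and pooled the same way, then scored with the trained classifier.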
|
|
|
Lluis Gomez, & Dimosthenis Karatzas. (2017). TextProposals: a Text‐specific Selective Search Algorithm for Word Spotting in the Wild. PR - Pattern Recognition, 70, 60–74.
Abstract: Motivated by the success of powerful yet expensive techniques that recognize words in a holistic way (Goel et al., 2013; Almazán et al., 2014; Jaderberg et al., 2016), object proposal techniques have emerged as an alternative to traditional text detectors. In this paper we introduce a novel object proposals method that is specifically designed for text. We rely on a similarity-based region grouping algorithm that generates a hierarchy of word hypotheses. Over the nodes of this hierarchy, a holistic word recognition method can be applied efficiently.
Our experiments demonstrate that the presented method is superior in its ability to produce good-quality word proposals when compared with class-independent algorithms. We show impressive recall rates with a few thousand proposals on different standard benchmarks, including focused and incidental text datasets and multi-language scenarios. Moreover, the combination of our object proposals with existing whole-word recognizers (Almazán et al., 2014; Jaderberg et al., 2016) shows competitive performance in end-to-end word spotting and, in some benchmarks, outperforms previously published results. Concretely, on the challenging ICDAR2015 Incidental Text dataset we outperform the best-performing method of the last ICDAR Robust Reading Competition (Karatzas, 2015) by more than 10% in F-score. Source code of the complete end-to-end system is available at https://github.com/lluisgomez/TextProposals.
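As an illustration of the core idea, the toy sketch below groups MSER regions by similarity and reads a word proposal off every node of the resulting dendrogram. The region extractor, the three-dimensional similarity features, and single-linkage clustering are simplifications standing in for the paper's actual cues.

```python
import cv2
import numpy as np
from scipy.cluster.hierarchy import linkage

def text_proposals(gray):
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(gray)
    # One feature vector per region: centre position and height act as
    # crude similarity cues between candidate character regions.
    feats = np.array([[x + w / 2.0, y + h / 2.0, h] for x, y, w, h in bboxes],
                     dtype=np.float64)
    Z = linkage(feats, method='single')
    boxes = [tuple(b) for b in bboxes]   # leaves of the hierarchy
    proposals = list(boxes)
    for a, b, _, _ in Z:                 # every merge node is a word hypothesis
        xa, ya, wa, ha = boxes[int(a)]
        xb, yb, wb, hb = boxes[int(b)]
        x1, y1 = min(xa, xb), min(ya, yb)
        x2, y2 = max(xa + wa, xb + wb), max(ya + ha, yb + hb)
        merged = (x1, y1, x2 - x1, y2 - y1)
        boxes.append(merged)
        proposals.append(merged)
    return proposals
```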
|
|
|
Lluis Gomez, Anguelos Nicolaou, & Dimosthenis Karatzas. (2017). Improving patch‐based scene text script identification with ensembles of conjoined networks. PR - Pattern Recognition, 67, 85–96.
|
|
|
Lluis Gomez, Y. Patel, Marçal Rusiñol, C.V. Jawahar, & Dimosthenis Karatzas. (2017). Self‐supervised learning of visual features through embedding images into text topic spaces. In 30th IEEE Conference on Computer Vision and Pattern Recognition.
Abstract: End-to-end training from scratch of current deep architectures for new computer vision problems would require ImageNet-scale datasets, which are not always available. In this paper we present a method that takes advantage of freely available multi-modal content to train computer vision algorithms without human supervision. We put forward the idea of performing self-supervised learning of visual features by mining a large-scale corpus of multi-modal (text and image) documents. We show that discriminative visual features can be learnt efficiently by training a CNN to predict the semantic context in which a particular image is most likely to appear as an illustration. For this we leverage the hidden semantic structures discovered in the text corpus with a well-known topic modeling technique. Our experiments demonstrate state-of-the-art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised or naturally supervised approaches.
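A hedged sketch of the training signal described here: topics computed on the documents' text become soft targets for a CNN that sees only the images. The backbone, topic count, and loss form below are assumptions for illustration, not the paper's setup.

```python
import torch
import torch.nn.functional as F
import torchvision
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def topic_targets(documents, n_topics=40):
    # LDA over the text side of the corpus; each document gets a
    # topic distribution that serves as the image's soft label.
    counts = CountVectorizer(stop_words='english', max_features=20000)
    X = counts.fit_transform(documents)
    lda = LatentDirichletAllocation(n_components=n_topics).fit(X)
    return torch.tensor(lda.transform(X), dtype=torch.float32)

cnn = torchvision.models.resnet18(num_classes=40)  # predicts topic logits

def topic_loss(images, targets):
    # Soft cross-entropy between predicted and LDA topic distributions.
    log_p = F.log_softmax(cnn(images), dim=1)
    return -(targets * log_p).sum(dim=1).mean()
```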
|
|
|
Marc Sunset Perez, Marc Comino Trinidad, Dimosthenis Karatzas, Antonio Chica Calaf, & Pere Pau Vazquez Alcocer. (2016). Development of general‐purpose projection‐based augmented reality systems. IADIS - IADIS International Journal on Computer Science and Information Systems, 1–18.
Abstract: Despite the large number of methods and applications of augmented reality, there is little homogenization of the software platforms that support them. An exception may be the low-level control software provided by some high-profile vendors such as Qualcomm and Metaio. However, these provide fine-grained modules for, e.g., element tracking. We are more concerned with the application framework, which includes the control of the devices working together for the development of the AR experience. In this paper we describe the development of a software framework for AR setups. We concentrate on the modular design of the framework, but also on some hard problems such as the calibration stage, which is crucial for projection-based AR. The developed framework is suitable for, and has been tested in, AR applications using camera-projector pairs, for both fixed and nomadic setups.
|
|
|
Ivet Rafegas, Javier Vazquez, Robert Benavente, Maria Vanrell, & Susana Alvarez. (2017). Enhancing spatio-chromatic representation with more-than-three color coding for image description. JOSA A - Journal of the Optical Society of America A, 34(5), 827–837.
Abstract: Extraction of spatio-chromatic features from color images is usually performed independently on each color channel. Usual 3D color spaces, such as RGB, present a high inter-channel correlation for natural images. This correlation can be reduced using color-opponent representations, but the spatial structure of regions with small color differences is not fully captured in two generic Red-Green and Blue-Yellow channels. To overcome these problems, we propose a new color coding that is adapted to the specific content of each image. Our proposal is based on two steps: (a) setting the number of channels to the number of distinctive colors we find in each image (avoiding the problem of channel correlation), and (b) building a channel representation that maximizes contrast differences within each color channel (avoiding the problem of low local contrast). We call this approach more-than-three color coding (MTT) to emphasize the fact that the number of channels is adapted to the image content. The higher the color complexity of an image, the more channels are used to represent it. Here we select the most predominant colors in the image as distinctive colors, which we call color pivots, and we build the new color coding using these color pivots as a basis. To evaluate the proposed approach we measure its efficiency in an image categorization task. We show how a generic descriptor improves its performance at the description level when applied to the MTT coding.
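One plausible reading of the MTT construction, sketched under simplifying assumptions: the paper selects predominant colors as pivots, whereas here k-means stands in for that step, and a Gaussian similarity to each pivot stands in for the contrast-maximizing channel construction.

```python
import numpy as np
from sklearn.cluster import KMeans

def mtt_channels(rgb_image, n_pivots=6, sigma=25.0):
    # Find n_pivots "color pivots" as cluster centres of the pixel colors.
    pixels = rgb_image.reshape(-1, 3).astype(np.float64)
    pivots = KMeans(n_clusters=n_pivots, n_init=4).fit(pixels).cluster_centers_
    h, w, _ = rgb_image.shape
    channels = np.empty((n_pivots, h, w))
    for k, pivot in enumerate(pivots):
        # Each channel encodes closeness to one pivot, so regions with
        # small color differences get their own high-contrast channel.
        dist = np.linalg.norm(pixels - pivot, axis=1)
        channels[k] = np.exp(-(dist ** 2) / (2 * sigma ** 2)).reshape(h, w)
    return channels  # one channel per distinctive color in the image
```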
|
|
|
Ivet Rafegas, & Maria Vanrell. (2016). Color spaces emerging from deep convolutional networks. In 24th Color and Imaging Conference (pp. 225–230).
Note: this paper received the award for the best interactive session.
Abstract: Defining color spaces that provide a good encoding of spatio-chromatic properties of color surfaces is an open problem in color science [8, 22]. Relatedly, the fusion of color with local image features has been studied and evaluated in computer vision [16]. In human vision research, the cells along the visual pathway that are selective to specific color hues are also a focus of attention [7, 14]. In line with these research aims, in this paper we study how color is encoded in a deep Convolutional Neural Network (CNN) that has been trained on more than one million natural images for object recognition. These convolutional nets achieve impressive performance in computer vision and rival the representations in the human brain. We explore how color is represented in a CNN architecture, which can give some intuition about efficient spatio-chromatic representations. In convolutional layers the activation of a neuron is related to a spatial filter that combines spatio-chromatic representations; we use an inverted version of these filters to explore their properties. Using a series of unsupervised methods, we classify different types of neurons depending on the color axes they define, and we propose an index of the color selectivity of a neuron. We estimate the main color axes that emerge from this trained net and show that the color selectivity of neurons decreases from early to deeper layers.
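The color-selectivity index itself is defined in the paper; the sketch below shows one hedged way such an index could be computed, comparing a unit's response on its top-activating images against grayscale copies (a large drop suggests a color-selective neuron). The `unit_activation` callable is a placeholder for a hook that returns the unit's scalar activation.

```python
import torch

def color_selectivity(unit_activation, top_images):
    # top_images: (N, 3, H, W) batch of the images that maximally fire the unit.
    # Grayscale copies keep luminance structure but remove chromatic content.
    gray = top_images.mean(dim=1, keepdim=True).repeat(1, 3, 1, 1)
    a_color = unit_activation(top_images).mean()
    a_gray = unit_activation(gray).mean()
    # 0 means the unit ignores color; values near 1 mean strong color tuning.
    return ((a_color - a_gray) / (a_color + 1e-8)).item()
```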
|
|
|
Ivet Rafegas, & Maria Vanrell. (2016). Colour Visual Coding in trained Deep Neural Networks. In European Conference on Visual Perception.
|
|
|
Marçal Rusiñol, & Josep Llados. (2017). Flowchart Recognition in Patent Information Retrieval. In M. Lupu, K. Mayer, N. Kando, & A.J. Trippe (Eds.), Current Challenges in Patent Information Retrieval (Vol. 37, pp. 351–368). Springer Berlin Heidelberg.
|
|
|
Alicia Fornes, Josep Llados, Oriol Ramos Terrades, & Marçal Rusiñol. (2016). La Visió per Computador com a Eina per a la Interpretació Automàtica de Fonts Documentals [Computer Vision as a Tool for the Automatic Interpretation of Documentary Sources]. Lligall, Revista Catalana d'Arxivística, 20–46.
|
|
|
Victor Vaquero, German Ros, Francesc Moreno-Noguer, Antonio Lopez, & Alberto Sanfeliu. (2017). Joint coarse-and-fine reasoning for deep optical flow. In 24th International Conference on Image Processing (pp. 2558–2562).
Abstract: We propose a novel representation for dense pixel-wise estimation tasks using CNNs that boosts accuracy and reduces training time by explicitly exploiting joint coarse-and-fine reasoning. The coarse reasoning is performed over a discrete classification space to obtain a rough general solution, while the fine details of the solution are obtained over a continuous regression space. In our approach both components are estimated jointly, which proved beneficial for improving estimation accuracy. Additionally, we propose a new network architecture that combines the coarse and fine components by treating the fine estimation as a refinement built on top of the coarse solution, thereby adding details to the general prediction. We apply our approach to the challenging problem of optical flow estimation and empirically validate it against state-of-the-art CNN-based solutions trained from scratch and tested on large optical flow datasets.
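A minimal sketch of such a joint head, with assumed names and shapes: a classification branch distributes probability over K discrete flow bins, and a regression branch adds a continuous residual on top of the expected coarse flow.

```python
import torch
import torch.nn as nn

class CoarseFineFlowHead(nn.Module):
    def __init__(self, in_ch, bins):
        super().__init__()
        # bins: (K, 2) tensor of candidate flow vectors (the discrete space).
        self.register_buffer('bins', bins)
        self.cls = nn.Conv2d(in_ch, bins.shape[0], 1)  # coarse: which bin
        self.reg = nn.Conv2d(in_ch, 2, 1)              # fine: residual flow

    def forward(self, feats):
        prob = self.cls(feats).softmax(dim=1)          # (B, K, H, W)
        # Expected coarse flow under the predicted bin distribution.
        coarse = torch.einsum('bkhw,kc->bchw', prob, self.bins)
        # The regression branch refines the coarse solution with details.
        return coarse + self.reg(feats)
```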
|
|
|
Arash Akbarinia, & C. Alejandro Parraga. (2016). Dynamically Adjusted Surround Contrast Enhances Boundary Detection. In European Conference on Visual Perception.
|
|
|
C. Alejandro Parraga, & Arash Akbarinia. (2016). Colour Constancy as a Product of Dynamic Centre-Surround Adaptation. In 16th Annual meeting in Vision Sciences Society (Vol. 16).
Abstract: Colour constancy refers to the human visual system's ability to preserve the perceived colour of objects despite changes in the illumination. Its exact mechanisms are unknown, although a number of systems ranging from retinal to cortical and memory are thought to play important roles. The strength of the perceptual shift necessary to preserve these colours is usually estimated by the vectorial distances from an ideal match (or canonical illuminant). In this work we explore how much of the colour constancy phenomenon could be explained by well-known physiological properties of V1 and V2 neurons whose receptive fields (RF) vary according to the contrast and orientation of surround stimuli. Indeed, it has been shown that both RF size and the normalization occurring between centre and surround in cortical neurons depend on the local properties of surrounding stimuli. Our starting point is the construction of a computational model that includes this dynamic centre-surround adaptation by means of two overlapping asymmetric Gaussian kernels. Their variances are adjusted to the contrast of surrounding pixels to represent the changes in RF size of cortical neurons, and the weights of their respective contributions are altered according to differences in centre-surround contrast and orientation. The final output of the model is obtained by convolving an image with this dynamic operator, and an estimation of the illuminant is obtained by considering the contrast of the far surround. We tested our algorithm on naturalistic stimuli from several benchmark datasets. Our results show that, although our model does not require any training, its performance against the state-of-the-art is highly competitive, even outperforming learning-based algorithms in some cases. These results are very encouraging considering that they were obtained with the same parameters for all datasets (i.e., just as the human visual system operates).
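Purely as illustration (the kernel widths, blending rule, and pooling quantile are assumptions, not the authors' parameters), a centre-surround operator whose surround is widened by local contrast might look like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_illuminant(image, base_sigma=1.5):
    # image: float array (H, W, 3) in [0, 1]
    est = []
    for c in range(3):
        ch = image[..., c]
        centre = gaussian_filter(ch, base_sigma)
        # Local contrast controls how wide the effective surround is.
        contrast = gaussian_filter(np.abs(ch - centre), 3 * base_sigma)
        w = contrast / (contrast.max() + 1e-8)
        surround = (1 - w) * gaussian_filter(ch, 3 * base_sigma) \
                 + w * gaussian_filter(ch, 6 * base_sigma)
        response = np.clip(centre - surround, 0, None)
        # Pool the strongest centre-surround responses per channel.
        est.append(ch[response >= np.quantile(response, 0.99)].mean())
    est = np.array(est)
    return est / np.linalg.norm(est)  # unit-norm illuminant estimate
```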
|
|
|
Marco Bellantonio, Mohammad A. Haque, Pau Rodriguez, Kamal Nasrollahi, Taisi Telve, Sergio Escalera, et al. (2016). Spatio-Temporal Pain Recognition in CNN-based Super-Resolved Facial Images. In 23rd International Conference on Pattern Recognition (Vol. 10165). LNCS.
Abstract: Automatic pain detection is a long-awaited solution to the prevalent medical problem of pain management. This is especially relevant when the subjects are young children or patients with a limited ability to communicate their pain experience. Computer vision-based analysis of facial pain expression provides an efficient way to detect pain, and deep machine learning methods have improved performance further. In this paper, we identify three important factors to exploit in automatic pain detection: the spatial information about pain available in each facial video frame, the temporal information about pain expression patterns across a subject's video sequence, and the variation of face resolution. We employ a combination of a convolutional neural network and a recurrent neural network to set up a deep hybrid pain detection framework that exploits both spatial and temporal pain information from facial video. To analyze the effect of different facial resolutions, we introduce a super-resolution algorithm to generate facial video frames at different resolutions. We evaluate performance on the publicly available UNBC-McMaster Shoulder Pain database. As a contribution, the paper provides novel and important information regarding the performance of a hybrid deep learning framework for pain detection in facial images of different resolutions.
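A bare-bones sketch of the hybrid architecture the abstract outlines, with placeholder backbone and sizes; super-resolved frames would simply be fed in as the input sequence.

```python
import torch
import torch.nn as nn
import torchvision

class PainRNN(nn.Module):
    def __init__(self, hidden=256, n_classes=2):
        super().__init__()
        # Per-frame spatial features from a small CNN backbone.
        self.cnn = torchvision.models.resnet18(num_classes=hidden)
        # Temporal integration of the frame features over the sequence.
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames):              # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])        # pain score from the last step
```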
|
|
|
Arnau Baro, Pau Riba, & Alicia Fornes. (2016). Towards the recognition of compound music notes in handwritten music scores. In 15th international conference on Frontiers in Handwriting Recognition.
Abstract: The recognition of handwritten music scores remains an open problem. Existing approaches can only deal with very simple handwritten scores, mainly because of the variability in handwriting style and the variability in the composition of groups of music notes (i.e., compound music notes). In this work we focus on the second problem and propose a method based on perceptual grouping for the recognition of compound music notes. Our method has been tested on several handwritten music scores from the CVC-MUSCIMA database and compared with a commercial Optical Music Recognition (OMR) software. Given that our method is learning-free, the obtained results are promising.
|
|