Kamal Nasrollahi, Sergio Escalera, P. Rasti, Gholamreza Anbarjafari, Xavier Baro, Hugo Jair Escalante, et al. (2015). Deep Learning based Super-Resolution for Improved Action Recognition. In 5th International Conference on Image Processing Theory, Tools and Applications IPTA2015 (pp. 67–72).
Abstract: Action recognition systems mostly work with videos of proper quality and resolution. Even the most challenging benchmark databases for action recognition hardly include low-resolution videos from, e.g., surveillance cameras. In videos recorded by such cameras, due to the distance between people and the cameras, people appear very small and hence challenge action recognition algorithms. Simple upsampling methods, like bicubic interpolation, cannot recover all the detailed information that can help recognition. To deal with this problem, in this paper we combine the results of bicubic interpolation with the results of a state-of-the-art deep learning-based super-resolution algorithm through an alpha-blending approach. The experimental results obtained on a down-sampled version of a large subset of the Hollywood2 benchmark database show the importance of the proposed system in increasing the recognition rate of a state-of-the-art action recognition system handling low-resolution videos.
|
Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, et al. (2017). PixelVAE: A Latent Variable Model for Natural Images. In 5th International Conference on Learning Representations.
Abstract: Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and generate samples that preserve global structure but tend to suffer from image blurriness. PixelCNNs model sharp contours and details very well, but lack an explicit latent representation and have difficulty modeling large-scale structure in a computationally efficient way. In this paper, we present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. The resulting architecture achieves state-of-the-art log-likelihood on binarized MNIST. We extend PixelVAE to a hierarchy of multiple latent variables at different scales; this hierarchical model achieves competitive likelihood on 64x64 ImageNet and generates high-quality samples on LSUN bedrooms.
Keywords: Deep Learning; Unsupervised Learning
|
Pau Rodriguez, Jordi Gonzalez, Jordi Cucurull, Josep M. Gonfaus, & Xavier Roca. (2017). Regularizing CNNs with Locally Constrained Decorrelations. In 5th International Conference on Learning Representations.
|
David Aldavert, Ricardo Toledo, Arnau Ramisa, & Ramon Lopez de Mantaras. (2009). Efficient Object Pixel-Level Categorization using Bag of Features. In 5th International Symposium on Visual Computing (Advances in Visual Computing, LNCS Vol. 5875, pp. 44–55). Springer Berlin Heidelberg.
Abstract: In this paper we present a pixel-level object categorization method suitable for application under real-time constraints. Since pixels are categorized using a bag of features scheme, the major bottleneck of such an approach would be the feature pooling in local histograms of visual words. Therefore, we propose to bypass this time-consuming step and directly obtain the score from a linear Support Vector Machine classifier. This is achieved by creating an integral image of the components of the SVM, which can readily provide the classification score for any image sub-window with only 10 additions and 2 products, regardless of its size. In addition, we evaluated the performance of two efficient feature quantization methods: Hierarchical K-Means and the Extremely Randomized Forest. All experiments were done on the Graz02 database, showing results comparable to, or even better than, related work at a lower computational cost.
|
Bogdan Raducanu, & Fadi Dornaika. (2009). Natural Facial Expression Recognition Using Dynamic and Static Schemes. In 5th International Symposium on Visual Computing (LNCS Vol. 5875, pp. 730–739). Springer Berlin Heidelberg.
Abstract: Affective computing is at the core of a new paradigm in HCI and AI represented by human-centered computing. Within this paradigm, it is expected that machines will be endowed with perceiving capabilities, making them aware of users’ affective state. The current paper addresses the problem of facial expression recognition from monocular video sequences. We propose a dynamic facial expression recognition scheme, which is proven to be very efficient. Furthermore, it is conveniently compared with several static-based systems adopting different magnitudes of facial expression. We provide performance evaluations using Linear Discriminant Analysis (LDA), Nonparametric Discriminant Analysis (NDA), and Support Vector Machines (SVM). We also provide performance evaluations using arbitrary test video sequences.
|
Veronica Romero, Emilio Granell, Alicia Fornes, Enrique Vidal, & Joan Andreu Sanchez. (2019). Information Extraction in Handwritten Marriage Licenses Books. In 5th International Workshop on Historical Document Imaging and Processing (pp. 66–71).
Abstract: Handwritten marriage licenses books are characterized by a simple structure of the text in the records and an evolving vocabulary, mainly composed of proper names that change over time. This distinct vocabulary makes automatic transcription and semantic information extraction difficult tasks. Previous works have shown that the use of category-based language models and a Grammatical Inference technique known as MGGI can improve the accuracy of these tasks. However, the application of the MGGI algorithm requires a priori knowledge to label the words of the training strings, which is not always easy to obtain. In this paper we study how to automatically obtain the information required by the MGGI algorithm using a technique based on Confusion Networks. Using the resulting language model, full handwritten text recognition and information extraction experiments have been carried out, with results supporting the proposed approach.
|
Marc Bolaños, Maite Garolera, & Petia Radeva. (2013). Active labeling application applied to food-related object recognition. In 5th International Workshop on Multimedia for Cooking & Eating Activities (pp. 45–50).
Abstract: Every day, lifelogging devices, available for recording different aspects of our daily life, increase in number, quality and functions, just like the multiple applications we give to them. Applying wearable devices to analyse the nutritional habits of people is a challenging application based on acquiring and analyzing life records over long periods of time. However, to extract the information of interest related to the eating patterns of people, we need automatic methods to process large amounts of lifelogging data (e.g. recognition of food-related objects). Creating a rich set of manually labeled samples to train the algorithms is slow, tedious and subjective. To address this problem, we propose a novel method in the framework of Active Labeling for constructing a training set of thousands of images. Inspired by the hierarchical sampling method for active learning [6], we propose an Active Forest that organizes the data hierarchically for easy and fast labeling. Moreover, introducing a classifier into the hierarchical structures, as well as transforming the feature space for better data clustering, additionally improves the algorithm. Our method is successfully tested by labeling 89,700 food-related objects and achieves a significant reduction in expert labelling time.
|
Jaume Amores, N. Sebe, Petia Radeva, Theo Gevers, & A. Smeulders. (2004). Boosting Contextual Information in Content-based Image Retrieval.
|
Jaume Amores, & Petia Radeva. (2003). Elastic Matching and Retrieval of IVUS Images Using Contextual Information.
|
David Masip, & Jordi Vitria. (2003). On the Nearest Neighbor Approach for Gender Recognition.
|
Joan Arnedo-Moreno, & Agata Lapedriza. (2010). Visualizing key authenticity: turning your face into your public key. In 6th China International Conference on Information Security and Cryptology (pp. 605–618). LNCS.
Abstract: Biometric information has become a technology complementary to cryptography, allowing cryptographic data to be managed conveniently. Two important needs are fulfilled: first of all, making such data always readily available, and additionally, making its legitimate owner easily identifiable. In this work we propose a signature system which integrates face recognition biometrics with an identity-based signature scheme, so the user's face effectively becomes his public key and system ID. Thus, other users may verify messages using photos of the claimed sender, providing a reasonable trade-off between system security and usability, as well as a much more straightforward public key authenticity and distribution process.
|
Miguel Reyes, Jose Ramirez Moreno, Juan R Revilla, Petia Radeva, & Sergio Escalera. (2011). ADiBAS: Sistema Multisensor de Adquisicion Automatica de Datos Corporales Objetivos, Robustos y Fiables para el Analisis de la Postura y el Movimiento. In 6th Congreso Iberoamericano de Tecnologia de Apoyo a la Discapacidad (pp. 939–944).
Abstract: Analysis of posture and range of motion is fundamental for understanding the optimization of movement and thereby improving performance and detecting possible injuries. This quantification is especially interesting in athletes or in patients with a neurological or musculoskeletal injury, since it makes it possible to follow the evolution of these patients, evaluate the effectiveness of the applied therapy and, if necessary, propose a modification of the treatment protocol.
In this work we present an automatic system that, by means of a non-invasive technology, automatically captures LED markers placed on the patient and subsequently analyzes them in order to provide the specialist with objective data for better diagnostic support. We also describe a markerless analytical system for body posture, whose execution during dynamic sequences gives the patient a high degree of naturalness when performing functional exercises.
|
Jordi Roca, Maria Vanrell, & C. Alejandro Parraga. (2012). What is constant in colour constancy? In 6th European Conference on Colour in Graphics, Imaging and Vision (pp. 337–343).
Abstract: Color constancy refers to the ability of the human visual system to stabilize the color appearance of surfaces under an illuminant change. In this work we studied how the interrelations among nine colors are perceived under illuminant changes, in particular whether they remain stable across 10 different conditions (5 illuminants and 2 backgrounds). To do so, we used a paradigm that measures several colors under an immersive state of adaptation. From our measurements we defined a perceptual structure descriptor that is up to 87% stable over all conditions, suggesting that color category features could be used to predict color constancy. This is in agreement with previous results on the stability of border categories [1,2] and with computational color constancy algorithms [3] for estimating the scene illuminant.
|
J. Chazalon, Marçal Rusiñol, & Jean-Marc Ogier. (2015). Improving Document Matching Performance by Local Descriptor Filtering. In 6th IAPR International Workshop on Camera Based Document Analysis and Recognition CBDAR2015 (pp. 1216–1220).
Abstract: In this paper we propose an effective method aimed at reducing the number of local descriptors to be indexed in a document matching framework. In an off-line training stage, the matching between the model document and incoming images is computed, retaining only the local descriptors from the model that steadily produce good matches. We have evaluated this approach using the ICDAR2015 SmartDOC dataset, containing nearly 25,000 images of documents captured with a mobile device. We have tested the performance of this filtering step using ORB and SIFT local detectors and descriptors. The results show an important gain both in the quality of the final matching and in time and space requirements.
|
Suman Ghosh, Lluis Gomez, Dimosthenis Karatzas, & Ernest Valveny. (2015). Efficient indexing for Query By String text retrieval. In 6th IAPR International Workshop on Camera Based Document Analysis and Recognition CBDAR2015 (pp. 1236–1240).
Abstract: This paper deals with Query By String word spotting in scene images. A hierarchical text segmentation algorithm based on text-specific selective search is used to find text regions. These regions are indexed by the character n-grams present in the text region. An attribute representation based on the Pyramidal Histogram of Characters (PHOC) is used to compare text regions with the query text. For index generation, a similar attribute space based on a pyramidal histogram of character n-grams is used. These attribute models are learned using linear SVMs over the Fisher Vector [1] representation of the images along with the PHOC labels of the corresponding strings.
|