Diego Cheda, Daniel Ponsa, & Antonio Lopez. (2012). Monocular Depth-based Background Estimation. In 7th International Conference on Computer Vision Theory and Applications (pp. 323–328).
Abstract: In this paper, we address the problem of reconstructing the background of a scene from a video sequence with occluding objects. The images are taken by hand-held cameras. Our method composes the background by selecting the appropriate pixels from previously aligned input images. To do that, we minimize a cost function that penalizes the deviations from the following assumptions: background represents objects whose distance to the camera is maximal, and background objects are stationary. Distance information is roughly obtained by a supervised learning approach that allows us to distinguish between close and distant image regions. Moving foreground objects are filtered out by using stationariness and motion boundary constancy measurements. The cost function is minimized by a graph cuts method. We demonstrate the applicability of our approach to recover an occlusion-free background in a set of sequences.
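The pixel-selection idea in the abstract above can be sketched as follows. This is a hypothetical toy version that keeps only the per-pixel data term (depth plus motion penalties); the paper adds a pairwise smoothness term and minimizes the full cost with graph cuts, which this sketch omits. All function and array names are ours, not the authors'.

```python
import numpy as np

def select_background(frames, depth_scores, motion_scores):
    """Pick, for every pixel, the source frame minimising a unary cost.

    frames        : (T, H, W) stack of aligned grayscale frames
    depth_scores  : (T, H, W) rough 'closeness' score (high = close to camera)
    motion_scores : (T, H, W) non-stationariness penalty (high = moving)
    The paper couples neighbouring pixels with a smoothness term and solves
    the labelling with graph cuts; this toy keeps only the data term.
    """
    cost = depth_scores + motion_scores      # penalise close and moving pixels
    best = np.argmin(cost, axis=0)           # (H, W) chosen frame per pixel
    rows, cols = np.indices(best.shape)
    return frames[best, rows, cols]          # composed background image
```

Per-pixel argmin ignores spatial coherence, which is exactly what the graph-cuts smoothness term restores in the actual method.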
|
Dennis G. Romero, Anselmo Frizera, Angel Sappa, Boris X. Vintimilla, & Teodiano F. Bastos. (2015). A predictive model for human activity recognition by observing actions and context. In Advanced Concepts for Intelligent Vision Systems, Proceedings of the 16th International Conference, ACIVS 2015 (Vol. 9386, pp. 323–333). LNCS. Springer International Publishing.
Abstract: This paper presents a novel model to estimate human activities, where a human activity is defined by a set of human actions. The proposed approach is based on the use of Recurrent Neural Networks (RNN) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work, human activities are inferred considering not only visual analysis but also additional resources; external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can later be associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions, plus contextual information or other kinds of information that could be relevant to describe the activity. Experimental results with real data are provided, showing the validity of the proposed approach.
|
Francisco Javier Orozco, Ognjen Rudovic, Jordi Gonzalez, & Maja Pantic. (2013). Hierarchical On-line Appearance-Based Tracking for 3D Head Pose, Eyebrows, Lips, Eyelids and Irises. IMAVIS - Image and Vision Computing, 31(4), 322–340.
Abstract: In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed tracking approaches, which deal with face and gaze tracking separately, our OABT can also be used for eyelid and iris tracking, as well as 3D head pose, lips and eyebrows facial actions tracking. Furthermore, our approach applies an on-line learning of changes in the appearance of the tracked target. Hence, the prior training of appearance models, which usually requires a large amount of labeled facial images, is avoided. Moreover, the proposed method is built upon a hierarchical combination of three OABTs, which are optimized using a Levenberg–Marquardt Algorithm (LMA) enhanced with line-search procedures. This, in turn, makes the proposed method robust to changes in lighting conditions, occlusions and translucent textures, as evidenced by our experiments. Finally, the proposed method achieves head and facial actions tracking in real-time.
Keywords: On-line appearance models; Levenberg–Marquardt algorithm; Line-search optimization; 3D face tracking; Facial action tracking; Eyelid tracking; Iris tracking
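The core of the Levenberg-Marquardt optimizer named in the entry above is a damped Gauss-Newton update. The sketch below is an illustrative single step under our own naming, not the authors' implementation; the paper additionally wraps this in line-search procedures and a hierarchical combination of three trackers.

```python
import numpy as np

def lm_step(residual, jacobian, params, lam):
    """One damped Gauss-Newton (Levenberg-Marquardt) update.

    residual(params) -> (m,) error vector to be minimised in least squares
    jacobian(params) -> (m, n) Jacobian of the residual
    lam              -> damping factor (0 = pure Gauss-Newton step)
    """
    r = residual(params)
    J = jacobian(params)
    # Damped normal equations: (J^T J + lam*I) delta = J^T r
    A = J.T @ J + lam * np.eye(len(params))
    delta = np.linalg.solve(A, J.T @ r)
    return params - delta
```

With a linear residual and `lam = 0` the step lands on the least-squares solution in one iteration; larger `lam` shortens the step toward gradient descent, which is what makes the method robust far from the optimum.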
|
Ernest Valveny, Philippe Dosch, & Alicia Fornes. (2008). Report on the Third Contest on Symbol Recognition. In W. Liu, J. Llados, & J.M. Ogier (Eds.), Graphics Recognition: Recent Advances and New Opportunities (Vol. 5046, pp. 321–328). LNCS.
|
Marçal Rusiñol, Josep Llados, & Gemma Sanchez. (2010). Symbol Spotting in Vectorized Technical Drawings Through a Lookup Table of Region Strings. PAA - Pattern Analysis and Applications, 13(3), 321–331.
Abstract: In this paper, we address the problem of symbol spotting in technical document images, applied to scanned and vectorized line drawings. Like any information spotting architecture, our approach has two components: first, symbols are decomposed into primitives, which are compactly represented, and second, a primitive indexing structure aims to efficiently retrieve similar primitives. Primitives are encoded in terms of attributed strings representing closed regions. Similar strings are clustered in a lookup table so that the set median strings act as indexing keys. A voting scheme formulates hypotheses in certain locations of the line drawing image where there is a high presence of regions similar to the queried ones, and therefore a high probability of finding the queried graphical symbol. The proposed approach is illustrated in a framework consisting of spotting furniture symbols in architectural drawings. It has been proven to work even in the presence of noise and distortion introduced by the scanning and raster-to-vector processes.
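The "set median string" indexing key mentioned in the abstract above is the cluster member minimizing the total distance to all other members. A minimal sketch over plain strings with Levenshtein distance; the paper works with attributed strings and domain-specific edit costs, so treat this as an illustration only.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                       # cost of deleting i characters
    for j in range(n + 1):
        d[0][j] = j                       # cost of inserting j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                     # deletion
                          d[i][j - 1] + 1,                     # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # subst.
    return d[m][n]

def set_median(strings):
    """String of the cluster minimising the sum of distances to the others."""
    return min(strings, key=lambda s: sum(edit_distance(s, t) for t in strings))
```

Unlike a generalized median, the set median is restricted to existing cluster members, which keeps it cheap to compute and directly usable as a lookup-table key.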
|
David Aldavert, Arnau Ramisa, Ramon Lopez de Mantaras, & Ricardo Toledo. (2010). Real-time Object Segmentation using a Bag of Features Approach. In R. Alquezar, A. Moreno, & J. Aguilar (Eds.), 13th International Conference of the Catalan Association for Artificial Intelligence (Vol. 220, pp. 321–329). IOS Press, Amsterdam.
Abstract: In this paper, we propose an object segmentation framework, based on the popular bag of features (BoF) approach, which can process several images per second while achieving good segmentation accuracy, assigning an object category to every pixel of the image. We propose an efficient color descriptor to complement the information obtained by a typical gradient-based local descriptor. Results show that color proves to be a useful cue to increase the segmentation accuracy, especially in large homogeneous regions. Then, we extend the Hierarchical K-Means codebook using the recently proposed Vector of Locally Aggregated Descriptors method. Finally, we show that the BoF method can be easily parallelized, since it is applied locally, further reducing the time necessary to process an image. The performance of the proposed method is evaluated on the standard PASCAL 2007 Segmentation Challenge object segmentation dataset.
Keywords: Object Segmentation; Bag Of Features; Feature Quantization; Densely sampled descriptors
|
Fernando Barrera, Felipe Lumbreras, & Angel Sappa. (2012). Evaluation of Similarity Functions in Multimodal Stereo. In 9th International Conference on Image Analysis and Recognition, Aveiro, Portugal (Vol. 7324, pp. 320–329). LNCS. Springer Berlin Heidelberg.
Abstract: This paper presents an evaluation framework for multimodal stereo matching that allows the comparison of four similarity functions. Additionally, it presents details of a multimodal stereo head that supplies thermal infrared and color images, as well as aspects of its calibration and rectification. The pipeline includes a novel method for disparity selection, which is suitable for evaluating the similarity functions. Finally, a benchmark for comparing different initializations of the proposed framework is presented. Similarity functions are based on mutual information, gradient orientation and scale space representations. Their evaluation is performed using two metrics: i) disparity error, and ii) number of correct matches on planar regions. In addition to the proposed evaluation, the current paper also shows that 3D sparse representations can be recovered from such a multimodal stereo head.
|
Michal Drozdzal, Santiago Segui, Petia Radeva, Carolina Malagelada, Fernando Azpiroz, & Jordi Vitria. (2015). Motility bar: a new tool for motility analysis of endoluminal videos. CBM - Computers in Biology and Medicine, 65, 320–330.
Abstract: Wireless Capsule Endoscopy (WCE) provides a new perspective of the small intestine, since it enables, for the first time, visualization of the entire organ. However, the long visual video analysis time, due to the large amount of data in a single WCE study, was an important factor impeding the widespread use of the capsule as a tool for intestinal abnormality detection. Therefore, the introduction of WCE triggered a new field for the application of computational methods, and in particular, of computer vision. In this paper, we follow the computational approach and come up with a new perspective on the small intestine motility problem. Our approach consists of three steps: first, we review a tool for the visualization of the motility information contained in WCE video; second, we propose algorithms for the characterization of two motility building blocks: a contraction detector and lumen size estimation; finally, we introduce an approach to detect segments of stable motility behavior. Our claims are supported by an evaluation performed with 10 WCE videos, suggesting that our methods ably capture the intestinal motility information.
Keywords: Small intestine; Motility; WCE; Computer vision; Image classification
|
Marta Teres, & Eduard Vazquez. (2010). Museums, spaces and museographical resources. Current state and proposals for a multidisciplinary framework to open new perspectives. In Proceedings of The CREATE 2010 Conference (pp. 319–323).
Abstract: Two of the main aims of a museum are to communicate its heritage and to let its visitors enjoy it. This communication can be done through the pieces themselves and the museographical resources, but also through the building, the interior design, the light and the colour. Art museums, in contrast to other museums, fall short in the application of these additional resources. Such a work necessarily requires a multidisciplinary point of view, for a holistic vision of all that a museum implies and to use all its potential as a tool of knowledge and culture for all visitors.
|
Ivet Rafegas, Maria Vanrell, Luis A Alexandre, & G. Arias. (2020). Understanding trained CNNs by indexing neuron selectivity. PRL - Pattern Recognition Letters, 136, 318–325.
Abstract: The impressive performance of Convolutional Neural Networks (CNNs) when solving different vision problems is shadowed by their black-box nature and our consequent lack of understanding of the representations they build and how these representations are organized. To help understand these issues, we propose to describe the activity of individual neurons by their Neuron Feature visualization and quantify their inherent selectivity with two specific properties. We explore selectivity indexes for: an image feature (color); and an image label (class membership). Our contribution is a framework to seek or classify neurons by indexing on these selectivity properties. It helps to find color-selective neurons, such as a red-mushroom neuron in layer Conv4, or class-selective neurons, such as dog-face neurons in layer Conv5 in VGG-M, and establishes a methodology to derive other selectivity properties. Indexing on neuron selectivity can statistically show how features and classes are represented through layers, at a moment when the size of trained nets is growing and automatic tools to index neurons can be helpful.
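A selectivity index of the kind described in the entry above can be illustrated with one common formulation: contrasting a neuron's mean activation on its preferred class against its mean activation on the rest. This is a stand-in of our own, not the exact indexes derived in the paper.

```python
import numpy as np

def class_selectivity(activations, labels):
    """Selectivity of one neuron for its preferred class.

    activations : (N,) mean activation of the neuron per image
    labels      : (N,) class label per image
    Returns an index in [0, 1]: 0 = responds equally to all classes,
    1 = responds to exactly one class. Illustrative formulation only.
    """
    classes = np.unique(labels)
    means = np.array([activations[labels == c].mean() for c in classes])
    best = means.max()                                  # preferred class
    rest = (means.sum() - best) / max(len(classes) - 1, 1)
    denom = best + rest
    return 0.0 if denom == 0 else (best - rest) / denom
```

Sorting all neurons in a layer by such an index is what makes "find the red-mushroom neuron" style queries possible.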
|
Kaida Xiao, Sophie Wuerger, Chenyang Fu, & Dimosthenis Karatzas. (2011). Unique Hue Data for Colour Appearance Models. Part i: Loci of Unique Hues and Hue Uniformity. CRA - Color Research & Application, 36(5), 316–323.
Abstract: Psychophysical experiments were conducted to assess unique hues on a CRT display for a large sample of colour-normal observers (n = 185). These data were then used to evaluate the most commonly used colour appearance model, CIECAM02, by transforming the CIEXYZ tristimulus values of the unique hues to the CIECAM02 colour appearance attributes, lightness, chroma and hue angle. We report two findings: (1) the hue angles derived from our unique hue data are inconsistent with the commonly used Natural Color System hues that are incorporated in the CIECAM02 model. We argue that our predicted unique hue angles (derived from our large dataset) provide a more reliable standard for colour management applications when the precise specification of these salient colours is important. (2) We test hue uniformity for CIECAM02 in all four unique hues and show significant disagreements for all hues, except for unique red, which seems to be invariant under lightness changes. Our dataset is useful to improve the CIECAM02 model as it provides reliable data for benchmarking.
Keywords: unique hues; colour appearance models; CIECAM02; hue uniformity
|
Yagmur Gucluturk, Umut Guclu, Xavier Baro, Hugo Jair Escalante, Isabelle Guyon, Sergio Escalera, et al. (2018). Multimodal First Impression Analysis with Deep Residual Networks. TAC - IEEE Transactions on Affective Computing, 8(3), 316–329.
Abstract: People form first impressions about the personalities of unfamiliar individuals even after very brief interactions with them. In this study, we present and evaluate several models that mimic this automatic social behavior. Specifically, we present several models trained on a large dataset of short YouTube video blog posts for predicting apparent Big Five personality traits of people and whether they seem suitable to be recommended to a job interview. Along with presenting our audiovisual approach and results that won the third place in the ChaLearn First Impressions Challenge, we investigate modeling in different modalities including audio only, visual only, language only, audiovisual, and a combination of audiovisual and language. Our results demonstrate that the best performance could be obtained using a fusion of all data modalities. Finally, in order to promote explainability in machine learning and to provide an example for the upcoming ChaLearn challenges, we present a simple approach for explaining the predictions for job interview recommendations.
|
Carme Julia, Angel Sappa, Felipe Lumbreras, & Antonio Lopez. (2008). Recovery of Surface Normals and Reflectance from Different Lighting Conditions. In 5th International Conference on Image Analysis and Recognition (Vol. 5112, pp. 315–325). LNCS.
|
Partha Pratim Roy, Umapada Pal, & Josep Llados. (2008). Multi-oriented English Text Line Extraction using Background and Foreground Information. In Proceedings of the 8th IAPR International Workshop on Document Analysis Systems (pp. 315–322).
|
Francisco Cruz, & Oriol Ramos Terrades. (2014). EM-Based Layout Analysis Method for Structured Documents. In 22nd International Conference on Pattern Recognition (pp. 315–320).
Abstract: In this paper, we present a method to perform layout analysis in structured documents. We propose an EM-based algorithm that fits a set of Gaussian mixtures to the different regions according to their logical distribution along the page. After convergence, we estimate the final shape of the regions according to the parameters computed for each component of the mixture. We evaluated our method on the task of record detection in a collection of historical structured documents and compared it with previous work on this task.
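The EM fitting step described in the entry above can be sketched for a one-dimensional toy case, where x would be, say, the vertical coordinates of ink pixels and each mixture component one record row. This is an illustrative generic EM of our own (including the percentile initialisation), not the authors' page-layout algorithm.

```python
import numpy as np

def em_gmm_1d(x, k, iters=50):
    """Fit a k-component 1-D Gaussian mixture to x with plain EM."""
    # Initialise means by spreading percentiles over the data (our choice)
    mu = np.percentile(x, np.linspace(10, 90, k)).astype(float)
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var
```

After convergence, each component's mean and variance describe the position and vertical extent of one region, which is the information the paper then turns into region shapes.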
|