Mariella Dimiccoli. (2016). Figure-ground segregation: A fully nonlocal approach. VR - Vision Research, 126, 308–317.
Abstract: We present a computational model that computes and integrates in a nonlocal fashion several configural cues for automatic figure-ground segregation. Our working hypothesis is that the figural status of each pixel is a nonlocal function of several geometric shape properties and that it can be estimated without explicitly relying on object boundaries. The methodology is grounded in two elements: multi-directional linear voting and nonlinear diffusion. A first estimation of the figural status of each pixel is obtained as a result of a voting process, in which several differently oriented line-shaped neighborhoods vote to express their belief about the figural status of the pixel. A nonlinear diffusion process is then applied to enforce the coherence of figural status estimates among perceptually homogeneous regions. Computer simulations fit human perception and match the experimental evidence that several cues cooperate in defining figure-ground segregation. The results of this work suggest that figure-ground segregation involves feedback from cells with larger receptive fields in higher visual cortical areas.
Keywords: Figure-ground segregation; Nonlocal approach; Directional linear voting; Nonlinear diffusion
|
C. Gratin, Jordi Vitria, F. Moreso, & D. Seron. (1994). Texture Classification using Neural Networks and Local Granulometries. In J. Serra & P. Soille (Eds.), EURASIP Workshop on Mathematical Morphology and Its Applications to Image Processing (pp. 309–316).
Keywords: Neural Networks; Granulometry; Kidney; Texture; Classification
|
Ciprian Corneanu, Meysam Madadi, & Sergio Escalera. (2018). Deep Structure Inference Network for Facial Action Unit Recognition. In 15th European Conference on Computer Vision (Vol. 11216, pp. 309–324). LNCS.
Abstract: Facial expressions are combinations of basic components called Action Units (AU). Recognizing AUs is key for general facial expression analysis. Recently, efforts in automatic AU recognition have been dedicated to learning combinations of local features and to exploiting correlations between AUs. We propose a deep neural architecture that tackles both problems by combining learned local and global features in its initial stages and replicating a message passing algorithm between classes similar to a graphical model inference approach in later stages. We show that by training the model end-to-end with increased supervision we improve the state of the art by 5.3% and 8.2% on the BP4D and DISFA datasets, respectively.
Keywords: Computer Vision; Machine Learning; Deep Learning; Facial Expression Analysis; Facial Action Units; Structure Inference
|
Miguel Oliveira, Victor Santos, Angel Sappa, P. Dias, & A. Moreira. (2016). Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives. RAS - Robotics and Autonomous Systems, 83, 312–325.
Abstract: When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives asynchronously and often contains overlapping or redundant information, so it is not trivial to create and update a representation of the environment observed by the vehicle over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
Keywords: Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives
|
Margarita Torre, & Petia Radeva. (2000). Agricultural-Field Extraction on Aerial Images by Region Competition Algorithm. In 15th International Conference on Pattern Recognition (Vol. 1, pp. 313–316).
|
Carles Sanchez, F. Javier Sanchez, Antoni Rosell, & Debora Gil. (2012). An illumination model of the trachea appearance in videobronchoscopy images. In Image Analysis and Recognition (Vol. 7325, pp. 313–320). LNCS. Springer Berlin Heidelberg.
Abstract: Videobronchoscopy is a medical imaging technique that allows interactive navigation inside the respiratory pathways. This imaging modality provides realistic images and allows non-invasive minimal intervention procedures. Tracheal procedures are routine interventions that require assessment of the percentage of obstructed pathway for injury (stenosis) detection. Visual assessment in videobronchoscopic sequences requires high expertise of trachea anatomy and is prone to human error.
This paper introduces an automatic method for estimating the percentage of tracheal obstruction due to stenosis in videobronchoscopic images. We look for tracheal rings, whose deformation determines the degree of obstruction. For ring extraction, we present a ring detector based on an illumination and appearance model. This model allows us to parametrise the ring detection. Finally, we can infer optimal estimation parameters for any video resolution.
Keywords: Bronchoscopy; tracheal ring; stenosis assessment; trachea appearance model; segmentation
|
Marçal Rusiñol, David Aldavert, Dimosthenis Karatzas, Ricardo Toledo, & Josep Llados. (2011). Interactive Trademark Image Retrieval by Fusing Semantic and Visual Content. In P. Clough, C. Foley, C. Gurrin, G.J.F. Jones, W. Kraaij, H. Lee, et al. (Eds.), Advances in Information Retrieval: 33rd European Conference on Information Retrieval (Vol. 6611, pp. 314–325). LNCS. Berlin: Springer.
Abstract: In this paper we propose an efficient query-by-example retrieval system which is able to retrieve trademark images by similarity from patent and trademark offices' digital libraries. Logo images are described by both their semantic content, by means of the Vienna codes, and their visual content, by using shape and color as visual cues. The trademark descriptors are then indexed by a locality-sensitive hashing data structure aiming to perform approximate k-NN search in high dimensional spaces in sub-linear time. The resulting ranked lists are combined by using the Condorcet method and a relevance feedback step helps to iteratively revise the query and refine the obtained results. The experiments demonstrate the effectiveness and efficiency of this system on a realistic and large dataset.
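The sub-linear approximate k-NN indexing the abstract refers to can be illustrated with a generic random-hyperplane LSH sketch for cosine similarity. This is a standard construction, not the authors' implementation; the class name `CosineLSH` and all parameters are illustrative assumptions.

```python
import numpy as np

class CosineLSH:
    """Toy random-hyperplane LSH index for approximate cosine k-NN."""

    def __init__(self, dim, n_bits=8, n_tables=4, seed=0):
        rng = np.random.default_rng(seed)
        # One set of random hyperplanes (normal vectors) per hash table.
        self.planes = [rng.normal(size=(n_bits, dim)) for _ in range(n_tables)]
        self.tables = [{} for _ in range(n_tables)]
        self.vectors = []

    def _key(self, planes, v):
        # The bucket key is the sign pattern of the projections onto the planes.
        return tuple(((planes @ v) > 0).tolist())

    def add(self, v):
        v = np.asarray(v, dtype=float)
        idx = len(self.vectors)
        self.vectors.append(v)
        for planes, table in zip(self.planes, self.tables):
            table.setdefault(self._key(planes, v), []).append(idx)
        return idx

    def query(self, v, k=3):
        # Union of the query's buckets, then exact re-ranking by cosine similarity.
        v = np.asarray(v, dtype=float)
        candidates = set()
        for planes, table in zip(self.planes, self.tables):
            candidates.update(table.get(self._key(planes, v), []))
        cos = lambda a, b: float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return sorted(candidates, key=lambda i: -cos(self.vectors[i], v))[:k]
```

Because a query only visits the buckets it hashes into, the exact re-ranking runs over a small candidate set rather than the full collection, which is what yields the sub-linear query time.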
|
Volkmar Frinken, Andreas Fischer, Horst Bunke, & Alicia Fornes. (2011). Co-training for Handwritten Word Recognition. In 11th International Conference on Document Analysis and Recognition (pp. 314–318).
Abstract: To cope with the tremendous variations of writing styles encountered between different individuals, unconstrained automatic handwriting recognition systems need to be trained on large sets of labeled data. Traditionally, the training data has to be labeled manually, which is a laborious and costly process. Semi-supervised learning techniques offer methods to utilize unlabeled data, which can be obtained cheaply in large amounts, in order to reduce the need for labeled data. In this paper, we propose the use of Co-Training for improving the recognition accuracy of two weakly trained handwriting recognition systems. The first one is based on Recurrent Neural Networks while the second one is based on Hidden Markov Models. On the IAM off-line handwriting database we demonstrate that a significant increase in recognition accuracy can be achieved with Co-Training for single word recognition.
|
Carme Julia, Angel Sappa, Felipe Lumbreras, & Antonio Lopez. (2008). Recovery of Surface Normals and Reflectance from Different Lighting Conditions. In 5th International Conference on Image Analysis and Recognition (Vol. 5112, pp. 315–325). LNCS.
|
Partha Pratim Roy, Umapada Pal, & Josep Llados. (2008). Multi-oriented English Text Line Extraction using Background and Foreground Information. In Proceedings of the 8th IAPR International Workshop on Document Analysis Systems (pp. 315–322).
|
Francisco Cruz, & Oriol Ramos Terrades. (2014). EM-Based Layout Analysis Method for Structured Documents. In 22nd International Conference on Pattern Recognition (pp. 315–320).
Abstract: In this paper we present a method to perform layout analysis in structured documents. We propose an EM-based algorithm that fits a set of Gaussian mixtures to the different regions according to their logical distribution along the page. After convergence, we estimate the final shape of the regions from the parameters computed for each component of the mixture. We evaluated our method on the task of record detection in a collection of historical structured documents and compared it with previous works on this task.
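The mixture fitting the abstract describes can be sketched generically with a plain two-component 1-D Gaussian-mixture EM. The 1-D setting and function names are illustrative assumptions, not the paper's actual model, which operates on page regions.

```python
import math
import random

def gauss(x, mu, var):
    # Univariate Gaussian density.
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm(data, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture with vanilla EM."""
    # Crude initialisation: split the sorted data in half.
    xs = sorted(data)
    half = len(xs) // 2
    mu = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [pi[k] * gauss(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return pi, mu, var
```

After convergence, each component's mean and variance delimit a region, which mirrors how the paper derives region shapes from the fitted mixture parameters.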
|
Kaida Xiao, Sophie Wuerger, Chenyang Fu, & Dimosthenis Karatzas. (2011). Unique Hue Data for Colour Appearance Models. Part I: Loci of Unique Hues and Hue Uniformity. CRA - Color Research & Application, 36(5), 316–323.
Abstract: Psychophysical experiments were conducted to assess unique hues on a CRT display for a large sample of colour-normal observers (n = 185). These data were then used to evaluate the most commonly used colour appearance model, CIECAM02, by transforming the CIEXYZ tristimulus values of the unique hues to the CIECAM02 colour appearance attributes, lightness, chroma and hue angle. We report two findings: (1) the hue angles derived from our unique hue data are inconsistent with the commonly used Natural Color System hues that are incorporated in the CIECAM02 model. We argue that our predicted unique hue angles (derived from our large dataset) provide a more reliable standard for colour management applications when the precise specification of these salient colours is important. (2) We test hue uniformity for CIECAM02 in all four unique hues and show significant disagreements for all hues, except for unique red which seems to be invariant under lightness changes. Our dataset is useful to improve the CIECAM02 model as it provides reliable data for benchmarking.
Keywords: unique hues; colour appearance models; CIECAM02; hue uniformity
|
Yagmur Gucluturk, Umut Guclu, Xavier Baro, Hugo Jair Escalante, Isabelle Guyon, Sergio Escalera, et al. (2018). Multimodal First Impression Analysis with Deep Residual Networks. TAC - IEEE Transactions on Affective Computing, 8(3), 316–329.
Abstract: People form first impressions about the personalities of unfamiliar individuals even after very brief interactions with them. In this study we present and evaluate several models that mimic this automatic social behavior. Specifically, we present several models trained on a large dataset of short YouTube video blog posts for predicting apparent Big Five personality traits of people and whether they seem suitable to be recommended to a job interview. Along with presenting our audiovisual approach and results that won third place in the ChaLearn First Impressions Challenge, we investigate modeling in different modalities including audio only, visual only, language only, audiovisual, and a combination of audiovisual and language. Our results demonstrate that the best performance could be obtained using a fusion of all data modalities. Finally, in order to promote explainability in machine learning and to provide an example for the upcoming ChaLearn challenges, we present a simple approach for explaining the predictions for job interview recommendations.
|
Ivet Rafegas, Maria Vanrell, Luis A Alexandre, & G. Arias. (2020). Understanding trained CNNs by indexing neuron selectivity. PRL - Pattern Recognition Letters, 136, 318–325.
Abstract: The impressive performance of Convolutional Neural Networks (CNNs) when solving different vision problems is shadowed by their black-box nature and our consequent lack of understanding of the representations they build and how these representations are organized. To help understand these issues, we propose to describe the activity of individual neurons by their Neuron Feature visualization and quantify their inherent selectivity with two specific properties. We explore selectivity indexes for: an image feature (color); and an image label (class membership). Our contribution is a framework to seek or classify neurons by indexing on these selectivity properties. It helps to find color-selective neurons, such as a red-mushroom neuron in layer Conv4, or class-selective neurons, such as dog-face neurons in layer Conv5 of VGG-M, and establishes a methodology to derive other selectivity properties. Indexing on neuron selectivity can statistically reveal how features and classes are represented through the layers, at a moment when the size of trained nets is growing and automatic tools to index neurons can be helpful.
|
Marta Teres, & Eduard Vazquez. (2010). Museums, spaces and museographical resources. Current state and proposals for a multidisciplinary framework to open new perspectives. In Proceedings of The CREATE 2010 Conference (pp. 319–323).
Abstract: Two of the main aims of a museum are to communicate its heritage and to delight its visitors. This communication can be achieved through the pieces themselves and the museographical resources, but also through the building, the interior design, the light and the colour. Art museums, in contrast with other museums, make little use of these additional resources. Such a work necessarily requires a multidisciplinary point of view, for a holistic vision of everything a museum implies and to use all its potential as a tool of knowledge and culture for all visitors.
|