Naila Murray, Maria Vanrell, Xavier Otazu, & C. Alejandro Parraga. (2011). Saliency Estimation Using a Non-Parametric Low-Level Vision Model. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 433–440).
Abstract: Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.
Keywords: Gaussian mixture model; ad hoc parameter selection; center-surround inhibition windows; center-surround mechanism; color appearance model; convolution; eye-fixation data; human vision; innate spatial pooling mechanism; inverse wavelet transform; low-level visual front-end; nonparametric low-level vision model; saliency estimation; saliency map; scale integration; scale-weighted center-surround response; scale-weighting function; visual task; Gaussian processes; biology; biology computing; colour vision; computer vision; visual perception; wavelet transforms
|
Miguel Oliveira, Angel Sappa, & V. Santos. (2011). Unsupervised Local Color Correction for Coarsely Registered Images. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 201–208).
Abstract: The current paper proposes a new parametric local color correction technique. First, several color transfer functions are computed from the output of the mean shift color segmentation algorithm. Second, color influence maps are calculated. Finally, the contributions of the color transfer functions are merged using the weights from the color influence maps. The proposed approach is compared with both global and local color correction approaches. Results show that our method outperforms the technique ranked first in a recent performance evaluation on this topic. Moreover, the proposed approach runs in about one tenth of the time required by that technique.
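The final merging step described in this abstract (blending several per-region color transfer functions with influence-map weights) can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the function name, the callable representation of the transfer functions, and the normalization against zero total weight are all assumptions.

```python
import numpy as np

def merge_color_corrections(image, transfers, influence_maps):
    """Blend per-region color transfer functions using influence-map weights.

    image: HxWx3 float array in [0, 1]
    transfers: list of callables, each mapping an HxWx3 image to a corrected one
    influence_maps: list of HxW weight maps, one per transfer function
    """
    corrected = np.zeros_like(image)
    total_weight = np.zeros(image.shape[:2])
    for transfer, weights in zip(transfers, influence_maps):
        # Each transfer's output contributes in proportion to its local influence
        corrected += weights[..., None] * transfer(image)
        total_weight += weights
    # Normalize; guard against pixels where no transfer has influence
    return corrected / np.maximum(total_weight, 1e-8)[..., None]
```

In this sketch each transfer function is applied to the whole image and weighted per pixel, so regions where one segment's influence map dominates are corrected almost entirely by that segment's transfer, while transition zones blend smoothly.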
|
Naveen Onkarappa, Sujay M. Veerabhadrappa, & Angel Sappa. (2012). Optical Flow in Onboard Applications: A Study on the Relationship Between Accuracy and Scene Texture. In 4th International Conference on Signal and Image Processing (Vol. 221, pp. 257–267).
Abstract: Optical flow plays a major role in making advanced driver assistance systems (ADAS) a reality. ADAS applications are expected to perform efficiently in all kinds of environments, since a vehicle may be driven on different kinds of roads, at different times and in different seasons. In this work, we study the relationship between optical flow and road type by analyzing optical flow accuracy on different road textures; several texture measures are evaluated for this purpose. Further, the relation of the regularization weight to flow accuracy in the presence of different textures is also analyzed. Additionally, we present a framework to generate synthetic sequences of different textures in ADAS scenarios with ground-truth optical flow.
|
Monica Piñol, Angel Sappa, & Ricardo Toledo. (2012). MultiTable Reinforcement for Visual Object Recognition. In 4th International Conference on Signal and Image Processing (Vol. 221, pp. 469–480). LNCS. Springer India.
Abstract: This paper presents a bag-of-features based method for visual object recognition. Our contribution is focused on the selection of the best feature descriptor. It is implemented using a novel multi-table reinforcement learning method that selects, from five classical descriptors (i.e., Spin, SIFT, SURF, C-SIFT and PHOW), the one that best describes each image. Experimental results and comparisons are provided, showing the improvements achieved with the proposed approach.
|
Petia Radeva, Ricardo Toledo, Craig Von Land, & Juan J. Villanueva. (1998). 3D Vessel Reconstruction from Biplane Angiograms using Snakes.
|
Petia Radeva, Ricardo Toledo, Craig Von Land, & Juan J. Villanueva. (1998). 3D Dynamic Model of the Coronary Tree.
|
Joost Van de Weijer, & Fahad Shahbaz Khan. (2013). Fusing Color and Shape for Bag-of-Words Based Object Recognition. In 4th Computational Color Imaging Workshop (Vol. 7786, pp. 25–34). Springer Berlin Heidelberg.
Abstract: In this article we provide an analysis of existing methods for the incorporation of color in bag-of-words based image representations. We propose a list of desired properties on which bases fusing methods can be compared. We discuss existing methods and indicate shortcomings of the two well-known fusing methods, namely early and late fusion. Several recent works have addressed these shortcomings by exploiting top-down information in the bag-of-words pipeline: color attention which is motivated from human vision, and Portmanteau vocabularies which are based on information theoretic compression of product vocabularies. We point out several remaining challenges in cue fusion and provide directions for future research.
Keywords: Object Recognition; color features; bag-of-words; image classification
|
Zhong Jin, Franck Davoine, & Zhen Lou. (2003). Facial expression analysis by using KPCA.
|
Patricia Suarez, Angel Sappa, & Boris X. Vintimilla. (2017). Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture. In 19th International Conference on Image Analysis and Processing.
Abstract: This paper focuses on near infrared (NIR) image colorization using a Conditional Deep Convolutional Generative Adversarial Network (CDCGAN) architecture. The proposed architecture is based on a conditional probabilistic generative model. First, it learns to colorize the given input image using a triplet model architecture that tackles every channel in an independent way. In the proposed model, the final layer of the red channel considers the infrared image to enhance the details, resulting in a sharp RGB image. Then, in the second stage, a discriminative model is used to estimate the probability that the generated image came from the training dataset rather than having been generated automatically. Experimental results with a large set of real images are provided, showing the validity of the proposed approach. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.
Keywords: CNN in Multispectral Imaging; Image Colorization
|
Dennis G. Romero, Anselmo Frizera, Angel Sappa, Boris X. Vintimilla, & Teodiano F. Bastos. (2015). A predictive model for human activity recognition by observing actions and context. In Advanced Concepts for Intelligent Vision Systems, Proceedings of the 16th International Conference, ACIVS 2015 (Vol. 9386, pp. 323–333). LNCS. Springer International Publishing.
Abstract: This paper presents a novel model to estimate human activities, where a human activity is defined by a set of human actions. The proposed approach is based on the usage of Recurrent Neural Networks (RNN) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work, human activities are inferred considering not only visual analysis but also additional resources; external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can later be associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions, plus contextual information or any other kind of information that could be relevant to describe the activity. Experimental results with real data are provided, showing the validity of the proposed approach.
|
Joan Serrat, Javier Varona, Antonio Lopez, Xavier Roca, & Juan J. Villanueva. (2001). P3: a three-dimensional digitizer prototype.
|
Debora Gil, Guillermo Torres, & Carles Sanchez. (2023). Transforming radiomic features into radiological words. In IEEE International Symposium on Biomedical Imaging.
|
Pau Cano, Debora Gil, & Eva Musulen. (2023). Towards automatic detection of Helicobacter pylori in histological samples of gastric tissue. In IEEE International Symposium on Biomedical Imaging.
|
Guillermo Torres, Debora Gil, Antonio Rosell, Sonia Baeza, & Carles Sanchez. (2023). A radiomic biopsy for virtual histology of pulmonary nodules. In IEEE International Symposium on Biomedical Imaging.
|
Xavier Otazu, Olivier Penacchio, & Xim Cerda-Company. (2015). Brightness and colour induction through contextual influences in V1. In Scottish Vision Group 2015 SVG2015 (Vol. 12, pp. 1208–2012).
|