Fernando Barrera, Felipe Lumbreras, & Angel Sappa. (2012). Evaluation of Similarity Functions in Multimodal Stereo. In 9th International Conference on Image Analysis and Recognition (Vol. 7324, pp. 320–329). LNCS. Springer Berlin Heidelberg. Aveiro, Portugal.
Abstract: This paper presents an evaluation framework for multimodal stereo matching that allows the performance of four similarity functions to be compared. Additionally, it presents details of a multimodal stereo head that supplies thermal infrared and color images, as well as aspects of its calibration and rectification. The pipeline includes a novel method for disparity selection, which is suitable for evaluating the similarity functions. Finally, a benchmark for comparing different initializations of the proposed framework is presented. The similarity functions are based on mutual information, gradient orientation, and scale space representations. Their evaluation is performed using two metrics: i) disparity error, and ii) number of correct matches on planar regions. In addition to the proposed evaluation, the current paper also shows that sparse 3D representations can be recovered from such a multimodal stereo head.
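Illustrative sketch (not from the paper): one of the evaluated similarity families is mutual information; a minimal histogram-based version for two co-registered patches (e.g. a color window versus a thermal window) could look as follows, with the bin count and NumPy implementation being assumptions.
```python
# Hypothetical sketch: histogram-based mutual information between two
# equally sized image patches from different modalities.
import numpy as np

def mutual_information(patch_a, patch_b, bins=32):
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                   # joint probability table
    px = pxy.sum(axis=1, keepdims=True)         # marginal of patch_a
    py = pxy.sum(axis=0, keepdims=True)         # marginal of patch_b
    nz = pxy > 0                                # skip empty bins (log(0))
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```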
|
Michal Drozdzal, Santiago Segui, Petia Radeva, Carolina Malagelada, Fernando Azpiroz, & Jordi Vitria. (2015). Motility bar: a new tool for motility analysis of endoluminal videos. CBM - Computers in Biology and Medicine, 65, 320–330.
Abstract: Wireless Capsule Endoscopy (WCE) provides a new perspective of the small intestine, since it enables, for the first time, visualization of the entire organ. However, the long visual video analysis time, due to the large amount of data in a single WCE study, was an important factor impeding the widespread use of the capsule as a tool for the detection of intestinal abnormalities. Therefore, the introduction of WCE triggered a new field for the application of computational methods, and in particular, of computer vision. In this paper, we follow the computational approach and offer a new perspective on the small intestine motility problem. Our approach consists of three steps: first, we review a tool for visualizing the motility information contained in WCE video; second, we propose algorithms for characterizing two motility building blocks: a contraction detector and a lumen size estimator; finally, we introduce an approach to detect segments of stable motility behavior. Our claims are supported by an evaluation performed on 10 WCE videos, suggesting that our methods ably capture the intestinal motility information.
Keywords: Small intestine; Motility; WCE; Computer vision; Image classification
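Illustrative sketch (not from the paper): since the lumen tends to appear as the darkest area of a WCE frame, a crude lumen-size proxy is the fraction of dark pixels; the fixed intensity threshold below is an assumption.
```python
import numpy as np

def lumen_size(gray_frame, threshold=40):
    """Fraction of the frame darker than a fixed intensity (lumen proxy);
    values near 0 suggest a closed lumen, larger values an open one."""
    return float((np.asarray(gray_frame) <= threshold).mean())
```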
|
Ernest Valveny, Philippe Dosch, & Alicia Fornes. (2008). Report on the Third Contest on Symbol Recognition. In W. Liu, J. Lladós, & J.M. Ogier (Eds.), Graphics Recognition: Recent Advances and New Opportunities (Vol. 5046, pp. 321–328). LNCS.
|
Marçal Rusiñol, Josep Llados, & Gemma Sanchez. (2010). Symbol Spotting in Vectorized Technical Drawings Through a Lookup Table of Region Strings. PAA - Pattern Analysis and Applications, 13(3), 321–331.
Abstract: In this paper, we address the problem of symbol spotting in technical document images, applied to scanned and vectorized line drawings. Like any information spotting architecture, our approach has two components: first, symbols are decomposed into primitives, which are compactly represented; second, a primitive indexing structure aims to efficiently retrieve similar primitives. Primitives are encoded in terms of attributed strings representing closed regions. Similar strings are clustered in a lookup table so that the set median strings act as indexing keys. A voting scheme formulates hypotheses at locations of the line drawing image where there is a high presence of regions similar to the queried ones, and therefore a high probability of finding the queried graphical symbol. The proposed approach is illustrated in a framework for spotting furniture symbols in architectural drawings. It has been shown to work even in the presence of noise and distortion introduced by the scanning and raster-to-vector processes.
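Illustrative sketch (not from the paper): the lookup table is keyed by set median strings, i.e. the cluster member minimizing total distance to the rest; a plain Levenshtein distance is used below as a stand-in for the paper's attributed-string costs.
```python
def edit_distance(a, b):
    """Levenshtein distance, row-wise dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def set_median(cluster):
    """Cluster member with minimal summed distance to all others."""
    return min(cluster, key=lambda s: sum(edit_distance(s, t) for t in cluster))
```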
|
David Aldavert, Arnau Ramisa, Ramon Lopez de Mantaras, & Ricardo Toledo. (2010). Real-time Object Segmentation using a Bag of Features Approach. In R. Alquezar, A. Moreno, & J. Aguilar (Eds.), 13th International Conference of the Catalan Association for Artificial Intelligence (Vol. 220, pp. 321–329). IOS Press, Amsterdam.
Abstract: In this paper, we propose an object segmentation framework based on the popular bag of features (BoF) approach, which can process several images per second while achieving good segmentation accuracy, assigning an object category to every pixel of the image. We propose an efficient color descriptor to complement the information obtained by a typical gradient-based local descriptor. Results show that color proves to be a useful cue for increasing segmentation accuracy, especially in large homogeneous regions. Then, we extend the Hierarchical K-Means codebook using the recently proposed Vector of Locally Aggregated Descriptors method. Finally, we show that the BoF method can be easily parallelized since it is applied locally, so the time needed to process an image is further reduced. The performance of the proposed method is evaluated on the standard PASCAL 2007 Segmentation Challenge dataset.
Keywords: Object Segmentation; Bag Of Features; Feature Quantization; Densely sampled descriptors
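Illustrative sketch (not from the paper): the local BoF step quantizes densely sampled descriptors against a codebook and classifies the visual-word histogram of the window around each pixel; a flat nearest-centroid assignment stands in for the Hierarchical K-Means tree, and all sizes are assumptions.
```python
import numpy as np

def quantize(descriptors, codebook):
    """Nearest-centroid visual-word assignment (flat stand-in for HKM)."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def window_histogram(word_map, row, col, half=16, n_words=256):
    """Visual-word histogram of the window centered on (row, col);
    a classifier on this histogram would label the central pixel."""
    patch = word_map[max(row - half, 0):row + half,
                     max(col - half, 0):col + half]
    return np.bincount(patch.ravel(), minlength=n_words) / patch.size
```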
|
Francisco Javier Orozco, Ognjen Rudovic, Jordi Gonzalez, & Maja Pantic. (2013). Hierarchical On-line Appearance-Based Tracking for 3D Head Pose, Eyebrows, Lips, Eyelids and Irises. IMAVIS - Image and Vision Computing, 31(4), 322–340.
Abstract: In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed tracking approaches, which deal with face and gaze tracking separately, our OABT can also be used for eyelid and iris tracking, as well as 3D head pose, lips and eyebrows facial actions tracking. Furthermore, our approach applies an on-line learning of changes in the appearance of the tracked target. Hence, the prior training of appearance models, which usually requires a large amount of labeled facial images, is avoided. Moreover, the proposed method is built upon a hierarchical combination of three OABTs, which are optimized using a Levenberg–Marquardt Algorithm (LMA) enhanced with line-search procedures. This, in turn, makes the proposed method robust to changes in lighting conditions, occlusions and translucent textures, as evidenced by our experiments. Finally, the proposed method achieves head and facial actions tracking in real-time.
Keywords: On-line appearance models; Levenberg–Marquardt algorithm; Line-search optimization; 3D face tracking; Facial action tracking; Eyelid tracking; Iris tracking
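Illustrative sketch (not from the paper): one damped Gauss-Newton (Levenberg-Marquardt) update with a backtracking line search on the step length, the optimization pattern the tracker builds on; the residual and Jacobian callables are assumptions.
```python
import numpy as np

def lm_step(params, residual_fn, jacobian_fn, damping=1e-3):
    r = residual_fn(params)                 # appearance residual vector
    J = jacobian_fn(params)                 # Jacobian of the residuals
    JtJ = J.T @ J
    delta = np.linalg.solve(JtJ + damping * np.diag(np.diag(JtJ)), -J.T @ r)
    err0, step = r @ r, 1.0
    for _ in range(10):                     # backtracking line search
        trial = params + step * delta
        rt = residual_fn(trial)
        if rt @ rt < err0:
            return trial
        step *= 0.5
    return params                           # keep old estimate if no gain
```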
|
Ruth Aylett, Ginevra Castellano, Bogdan Raducanu, Ana Paiva, & Marc Hanheide. (2011). Long-term socially perceptive and interactive robot companions: challenges and future perspectives. In 13th International Conference on Multimodal Interaction (pp. 323–326). ACM.
Abstract: This paper gives a brief overview of the challenges for multimodal perception and generation applied to robot companions situated in human social environments. It reviews the current state of both perception and generation and their immediate technical challenges, and goes on to consider the extra issues raised by embodiment and social context. Finally, it briefly discusses the impact of systems that must function continually over months rather than just for a few hours.
Keywords: human-robot interaction, multimodal interaction, social robotics
|
Diego Cheda, Daniel Ponsa, & Antonio Lopez. (2012). Monocular Depth-based Background Estimation. In 7th International Conference on Computer Vision Theory and Applications (pp. 323–328).
Abstract: In this paper, we address the problem of reconstructing the background of a scene from a video sequence with occluding objects. The images are taken by hand-held cameras. Our method composes the background by selecting the appropriate pixels from previously aligned input images. To do that, we minimize a cost function that penalizes deviations from the following assumptions: the background consists of objects whose distance to the camera is maximal, and background objects are stationary. Distance information is roughly obtained by a supervised learning approach that allows us to distinguish between close and distant image regions. Moving foreground objects are filtered out using stationarity and motion-boundary constancy measurements. The cost function is minimized by a graph cuts method. We demonstrate the applicability of our approach to recover an occlusion-free background in a set of sequences.
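Illustrative sketch (not from the paper): the data term prefers, per pixel, the aligned frame that looks most distant and most stationary; the paper minimizes the full cost with graph cuts, so the per-pixel argmin below is a simplification, and the score maps and weighting are assumptions.
```python
import numpy as np

def select_background(frames, depth_scores, motion_scores, alpha=0.5):
    """frames, depth_scores, motion_scores: (T, H, W) aligned stacks;
    higher depth_scores = more distant, higher motion_scores = moving."""
    cost = alpha * (1.0 - depth_scores) + (1.0 - alpha) * motion_scores
    best = cost.argmin(axis=0)              # winning frame per pixel
    rows, cols = np.indices(best.shape)
    return frames[best, rows, cols]         # composited background
```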
|
Dennis G. Romero, Anselmo Frizera, Angel Sappa, Boris X. Vintimilla, & Teodiano F. Bastos. (2015). A predictive model for human activity recognition by observing actions and context. In Advanced Concepts for Intelligent Vision Systems, Proceedings of the 16th International Conference, ACIVS 2015 (Vol. 9386, pp. 323–333). LNCS. Springer International Publishing.
Abstract: This paper presents a novel model to estimate human activities, where a human activity is defined by a set of human actions. The proposed approach is based on the use of Recurrent Neural Networks (RNN) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work, human activities are inferred considering not only visual analysis but also additional resources; external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can later be associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions, plus contextual or other information that could be relevant to describe the activity. Experimental results with real data are provided, showing the validity of the proposed approach.
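Illustrative sketch (not from the paper): the Bayesian part can be read as a posterior update over activities as actions are observed; the likelihood table and activity labels below are invented for illustration, and the paper couples this with an RNN.
```python
import numpy as np

def update_posterior(prior, likelihoods, observation):
    """prior: (A,) activity probabilities; likelihoods: obs -> (A,) array."""
    posterior = prior * likelihoods[observation]
    return posterior / posterior.sum()

prior = np.array([0.5, 0.5])                        # [cooking, cleaning]
likelihoods = {"open_fridge": np.array([0.8, 0.2]),
               "pick_up_mop": np.array([0.1, 0.9])}
posterior = update_posterior(prior, likelihoods, "open_fridge")
# posterior -> [0.8, 0.2]: "cooking" becomes the more probable activity
```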
|
Javier Vazquez, Maria Vanrell, & Robert Benavente. (2010). Color names as a constraint for Computer Vision problems. In Proceedings of The CREATE 2010 Conference (pp. 324–328).
Abstract: Computer vision problems are usually ill-posed. Constraining the gamut of possible solutions is then a necessary step. Many constraints for different problems have been developed over the years. In this paper, we present a different way of constraining some of these problems: the use of color names. In particular, we focus on segmentation, representation, and constancy.
|
R. Valenti, N. Sebe, & Theo Gevers. (2012). What are you looking at? Improving Visual Gaze Estimation by Saliency. IJCV - International Journal of Computer Vision, 98(3), 324–334.
Abstract: In this paper we present a novel mechanism to obtain enhanced gaze estimation for subjects looking at a scene or an image. The system makes use of prior knowledge about the scene (e.g. an image on a computer screen), to define a probability map of the scene the subject is gazing at, in order to find the most probable location. The proposed system helps in correcting the fixations which are erroneously estimated by the gaze estimation device by employing a saliency framework to adjust the resulting gaze point vector. The system is tested on three scenarios: using eye tracking data, enhancing a low accuracy webcam based eye tracker, and using a head pose tracker. The correlation between the subjects in the commercial eye tracking data is improved by an average of 13.91%. The correlation on the low accuracy eye gaze tracker is improved by 59.85%, and for the head pose tracker we obtain an improvement of 10.23%. These results show the potential of the system as a way to enhance and self-calibrate different visual gaze estimation systems.
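Illustrative sketch (not from the paper): the correction can be pictured as snapping a noisy fixation to the most salient point within a neighborhood sized by the tracker's expected error; the radius and the precomputed saliency map are assumptions.
```python
import numpy as np

def correct_fixation(saliency_map, fixation, radius=40):
    """fixation: (row, col) from the gaze tracker; returns adjusted point."""
    r, c = fixation
    h, w = saliency_map.shape
    r0, r1 = max(r - radius, 0), min(r + radius, h)
    c0, c1 = max(c - radius, 0), min(c + radius, w)
    window = saliency_map[r0:r1, c0:c1]     # search area around estimate
    dr, dc = np.unravel_index(window.argmax(), window.shape)
    return (r0 + dr, c0 + dc)               # most salient nearby location
```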
|
Marc Oliu, Sarah Adel Bargal, Stan Sclaroff, Xavier Baro, & Sergio Escalera. (2022). Multi-varied Cumulative Alignment for Domain Adaptation. In 21st International Conference on Image Analysis and Processing (Vol. 13232, pp. 324–334). LNCS.
Abstract: Domain Adaptation methods can be classified into two basic families of approaches: non-parametric and parametric. Non-parametric approaches depend on statistical indicators such as feature covariances to minimize the domain shift. Non-parametric approaches tend to be fast to compute and require no additional parameters, but they are unable to leverage probability density functions with complex internal structures. Parametric approaches, on the other hand, use models of the probability distributions as surrogates in minimizing the domain shift, but they require additional trainable parameters to model these distributions. In this work, we propose a new statistical approach to minimizing the domain shift based on stochastically projecting and evaluating the cumulative density function in both domains. As with non-parametric approaches, there are no additional trainable parameters. As with parametric approaches, the internal structure of both domains’ probability distributions is considered, thus leveraging a higher amount of information when reducing the domain shift. Evaluation on standard datasets used for Domain Adaptation shows better performance of the proposed model compared to non-parametric approaches while being competitive with parametric ones. (Code available at: https://github.com/moliusimon/mca).
Keywords: Domain Adaptation; Computer vision; Neural networks
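Illustrative sketch (not from the paper): the statistical core, stochastically projecting both domains and comparing cumulative distributions along each projection, can be approximated by matching inverse-CDF quantiles of random 1D projections; the projection count and quantile grid are assumptions, and the paper's actual loss differs in its details.
```python
import numpy as np

def cumulative_alignment(src, tgt, n_proj=32, n_quantiles=64, seed=0):
    """src: (N, D), tgt: (M, D) feature batches; returns a scalar shift."""
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_proj, src.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    qs = np.linspace(0.0, 1.0, n_quantiles)
    loss = 0.0
    for d in dirs:
        a = np.quantile(src @ d, qs)        # inverse CDF, source projection
        b = np.quantile(tgt @ d, qs)        # inverse CDF, target projection
        loss += np.mean((a - b) ** 2)
    return loss / n_proj
```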
|
Angel Sappa, Niki Aifanti, Sotiris Malassiotis, & Michael G. Strintzis. (2003). Monocular 3D Human Body Reconstruction Towards Depth Augmentation of Television Sequences. In IEEE International Conference on Image Processing, Barcelona, Spain, September 2003 (pp. 325–328).
|
Agnes Borras, & Josep Llados. (2005). Object Image Retrieval by Shape Content in Complex Scenes Using Geometric Constraints. In Pattern Recognition and Image Analysis (Vol. 3522, pp. 325–332). LNCS. Springer.
Abstract: This paper presents an image retrieval system based on 2D shape information. Query shape objects and database images are represented by polygonal approximations of their contours. Afterwards they are encoded, using geometric features, in terms of predefined structures. Shapes are then located in database images by a voting procedure on the spatial domain. An alignment matching then provides a probability value to rank the database images in the retrieval result. The method allows a query object to be detected in database images even when they contain complex scenes. The shape matching also tolerates partial occlusions and affine transformations such as translation, rotation, or scaling.
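Illustrative sketch (not from the paper): the contour preprocessing amounts to a polygonal approximation; a standard Ramer-Douglas-Peucker simplification is shown below, with the tolerance being an assumption.
```python
import numpy as np

def rdp(points, eps=2.0):
    """Ramer-Douglas-Peucker simplification of an (N, 2) polyline."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    seg = end - start
    dx, dy = points[:, 0] - start[0], points[:, 1] - start[1]
    # perpendicular distance of each point to the chord start-end
    dists = np.abs(seg[0] * dy - seg[1] * dx) / (np.linalg.norm(seg) + 1e-12)
    i = int(dists.argmax())
    if dists[i] > eps:                      # split at the farthest point
        left, right = rdp(points[:i + 1], eps), rdp(points[i:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])          # chord is a good enough fit
```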
|
Shida Beigpour, & Joost Van de Weijer. (2011). Object Recoloring Based on Intrinsic Image Estimation. In 13th IEEE International Conference on Computer Vision (pp. 327–334).
Abstract: Object recoloring is one of the most popular photo-editing tasks. The problem of object recoloring is highly under-constrained, and existing recoloring methods limit their application to objects lit by a white illuminant. Application of these methods to real-world scenes lit by colored illuminants, multiple illuminants, or interreflections, results in unrealistic recoloring of objects. In this paper, we focus on the recoloring of single-colored objects presegmented from their background. The single-color constraint allows us to fit a more comprehensive physical model to the object. We demonstrate that this permits us to perform realistic recoloring of objects lit by non-white illuminants, and multiple illuminants. Moreover, the model allows for more realistic handling of illuminant alteration of the scene. Recoloring results on images captured by uncalibrated cameras demonstrate that the proposed framework obtains realistic recoloring for complex natural images. Furthermore, we use the model to transfer color between objects and show that the results are more realistic than existing color transfer methods.
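Illustrative sketch (not from the paper): the recoloring principle of keeping shading while swapping reflectance can be caricatured by factoring each masked pixel into intensity and chromaticity and recombining with the target chromaticity; the paper's physical model additionally handles colored and multiple illuminants.
```python
import numpy as np

def recolor(rgb, mask, target_rgb):
    """rgb: (H, W, 3) floats in [0, 1]; mask: (H, W) bool; target: (3,)."""
    out = rgb.copy()
    shading = rgb[mask].mean(axis=1, keepdims=True)   # intensity as shading
    target = np.asarray(target_rgb, dtype=float)
    target = target / (target.mean() + 1e-12)         # unit-mean chromaticity
    out[mask] = np.clip(shading * target, 0.0, 1.0)   # shading preserved
    return out
```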
|