Joost Van de Weijer, Robert Benavente, Maria Vanrell, Cordelia Schmid, Ramon Baldrich, Jacob Verbeek, et al. (2012). Color Naming. In Theo Gevers, Arjan Gijsenij, Joost Van de Weijer, & Jan-Mark Geusebroek (Eds.), Color in Computer Vision: Fundamentals and Applications (pp. 287–317). John Wiley & Sons, Ltd.
|
Jose Manuel Alvarez, & Antonio Lopez. (2012). Photometric Invariance by Machine Learning. In Theo Gevers, Arjan Gijsenij, Joost van de Weijer, & Jan-Mark Geusebroek (Eds.), Color in Computer Vision: Fundamentals and Applications (Vol. 7, pp. 113–134). iConcept Press Ltd.
|
Angel Sappa, David Geronimo, Fadi Dornaika, Mohammad Rouhani, & Antonio Lopez. (2012). Moving object detection from mobile platforms using stereo data registration. In Marek R. Ogiela, & Lakhmi C. Jain (Eds.), Computational Intelligence paradigms in advanced pattern classification (Vol. 386, pp. 25–37). Springer Berlin Heidelberg.
Abstract: This chapter describes a robust approach for detecting moving objects from on-board stereo vision systems. It relies on a feature-point, quaternion-based registration, which avoids common problems that appear when computationally expensive iterative algorithms are used in dynamic environments. The proposed approach consists of three main stages. Initially, feature points are extracted and tracked through consecutive 2D frames. Then, a RANSAC-based approach is used to register two point sets with known correspondences in 3D space. The computed 3D rigid displacement is used to map two consecutive 3D point clouds into the same coordinate system by means of the quaternion method. Finally, moving objects correspond to those areas with large 3D registration errors. Experimental results show the viability of the proposed approach for detecting moving objects such as vehicles or pedestrians in different urban scenarios.
Keywords: pedestrian detection
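The quaternion method mentioned in the abstract is the classical closed-form absolute-orientation solution (Horn's method); the sketch below is not the authors' implementation, and the function names and the minimal RANSAC loop are illustrative only. It registers two 3D point sets with known correspondences:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def horn_registration(P, Q):
    """Closed-form rigid alignment Q ~ R @ P + t (Horn's quaternion method)."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    S = (P - p_bar).T @ (Q - q_bar)            # 3x3 cross-covariance
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx+Syy+Szz, Syz-Szy,      Szx-Sxz,      Sxy-Syx],
        [Syz-Szy,     Sxx-Syy-Szz,  Sxy+Syx,      Szx+Sxz],
        [Szx-Sxz,     Sxy+Syx,     -Sxx+Syy-Szz,  Syz+Szy],
        [Sxy-Syx,     Szx+Sxz,      Syz+Szy,     -Sxx-Syy+Szz],
    ])
    w, V = np.linalg.eigh(N)                   # optimal quaternion = eigenvector
    R = quat_to_rot(V[:, -1])                  # of the largest eigenvalue
    t = q_bar - R @ p_bar
    return R, t

def ransac_rigid(P, Q, iters=200, thresh=0.05, seed=0):
    """Robust registration: sample minimal 3-point sets, keep the largest
    consensus, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(P), bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = horn_registration(P[idx], Q[idx])
        err = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return horn_registration(P[best_inliers], Q[best_inliers])
```

In the chapter's setting, P and Q would be the tracked feature points of two consecutive stereo frames; points whose residual under the recovered (R, t) stays large are candidates for moving objects.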
|
Laura Igual, Joan Carles Soliva, Antonio Hernandez, Sergio Escalera, Oscar Vilarroya, & Petia Radeva. (2012). A Supervised Graph-cut Deformable Model for Brain MRI Segmentation. In Deformation Models: Tracking, Animation and Applications (Lecture Notes in Computational Vision and Biomechanics). Springer Netherlands.
|
Nataliya Shapovalova, Carles Fernandez, Xavier Roca, & Jordi Gonzalez. (2011). Semantics of Human Behavior in Image Sequences. In Albert Ali Salah, & Theo Gevers (Eds.), Computer Analysis of Human Behavior (pp. 151–182). Springer London.
Abstract: Human behavior is contextualized, and understanding the scene in which an action takes place is crucial for assigning proper semantics to behavior. In this chapter we present a novel approach for scene understanding, with emphasis on the particular case of Human Event Understanding. We introduce a new taxonomy to organize the different semantic levels of the proposed Human Event Understanding framework. This framework contributes to the scene-understanding domain by (i) extracting behavioral patterns from the integrative analysis of spatial, temporal, and contextual evidence and (ii) integrating bottom-up and top-down approaches to Human Event Understanding. We explore how information about interactions between humans and their environment influences the performance of activity recognition, and how this can be extrapolated to the temporal domain in order to draw higher-level inferences from human events observed in sequences of images.
|
Debora Gil, Oriol Rodriguez-Leor, Petia Radeva, & Aura Hernandez-Sabate. (2007). Assessing Artery Motion Compensation in IVUS. In Computer Analysis of Images and Patterns (Vol. 4673, pp. 213–220). Lecture Notes in Computer Science. Heidelberg: Springer.
Abstract: Cardiac dynamics suppression is a main issue for visual improvement and for the computation of tissue mechanical properties in IntraVascular UltraSound (IVUS). Although several motion compensation techniques have arisen in recent years, there is a lack of objective evaluation of motion reduction in in vivo pullbacks. We argue that the assessment protocol deserves special attention if clinical applicability is to be as reliable as possible. Our work focuses on defining a quality measure and a validation protocol for assessing IVUS motion compensation. On the grounds of continuum mechanics laws, we introduce a novel score measuring motion reduction in in vivo sequences. Synthetic experiments validate the proposed score as a measure of the accuracy of motion parameters, while results on in vivo pullbacks show its reliability in clinical cases.
Keywords: validation standards; quality measures; IVUS motion compensation; conservation laws; Fourier development
|
Ole Vilhelm-Larsen, Petia Radeva, & Enric Marti. (1995). Guidelines for choosing optimal parameters of elasticity for snakes. In Computer Analysis of Images and Patterns (Vol. 970, pp. 106–113). LNCS.
Abstract: This paper provides guidance on choosing and using the elasticity parameters of a snake in order to obtain a precise segmentation. A new two-step procedure is defined, based on upper and lower bounds on the parameters. Formulas by which these bounds can be calculated for real images, where parts of the contour may be missing, are presented. Experiments on the segmentation of bone structures in X-ray images verify the usefulness of the new procedure.
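For orientation only, the role the elasticity parameter plays in a snake's evolution can be sketched with the classical semi-implicit update of Kass et al.; the chapter's actual upper/lower-bound formulas are not reproduced here, and the function names below are illustrative:

```python
import numpy as np

def snake_matrix(n, alpha, beta):
    """Circulant internal-energy matrix for a closed snake:
    alpha weights elasticity (first derivative), beta rigidity (second)."""
    row = np.zeros(n)
    row[0] = 2*alpha + 6*beta
    row[1] = row[-1] = -alpha - 4*beta
    row[2] = row[-2] = beta
    return np.stack([np.roll(row, i) for i in range(n)])

def evolve(xy, f_ext, alpha, beta, gamma=1.0, steps=100):
    """Semi-implicit snake iteration: x_{t+1} = (A + gamma*I)^-1 (gamma*x_t + F)."""
    n = xy.shape[0]
    inv = np.linalg.inv(snake_matrix(n, alpha, beta) + gamma * np.eye(n))
    for _ in range(steps):
        xy = inv @ (gamma * xy + f_ext(xy))
    return xy
```

With no external force a closed contour shrinks under its internal energy, which is exactly why well-chosen bounds on the elasticity parameters matter: too large and the snake collapses past the target contour, too small and it cannot bridge missing contour segments.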
|
Michael Teutsch, Angel Sappa, & Riad I. Hammoud. (2022). Cross-Spectral Image Processing. In Computer Vision in the Infrared Spectrum. Synthesis Lectures on Computer Vision (pp. 23–34). SLCV. Springer.
Abstract: Although this book is on IR computer vision and its main focus lies on IR image and video processing and analysis, special attention is dedicated to cross-spectral image processing due to the increasing number of publications and applications in this domain. In these cross-spectral frameworks, IR information is used together with information from other spectral bands to tackle specific problems by developing more robust solutions. Tasks considered for cross-spectral processing include dehazing, segmentation, vegetation index estimation, and face recognition. This increasing number of applications is motivated by the cross- and multi-spectral camera setups already available on the market, such as smartphones, remote sensing multispectral cameras, and multi-spectral cameras for automotive systems or drones. In this chapter, different cross-spectral image processing techniques are reviewed together with possible applications. Initially, image registration approaches for the cross-spectral case are reviewed: the registration stage is the first image processing task, needed to align images acquired by different sensors within the same reference coordinate system. Then, recent cross-spectral image colorization approaches, intended to colorize infrared images for different applications, are presented. Finally, the cross-spectral image enhancement problem is tackled, including guided super-resolution techniques, image dehazing approaches, cross-spectral filtering, and edge detection. Figure 3.1 illustrates cross-spectral image processing stages as well as their possible connections. Table 3.1 presents some of the available public cross-spectral datasets generally used as reference data to evaluate cross-spectral image registration, colorization, enhancement, or exploitation results.
|
Michael Teutsch, Angel Sappa, & Riad I. Hammoud. (2022). Detection, Classification, and Tracking. In Computer Vision in the Infrared Spectrum. Synthesis Lectures on Computer Vision (pp. 35–58). SLCV. Springer.
Abstract: Automatic image and video exploitation, or content analysis, is a technique to extract higher-level information from a scene such as objects, behavior, (inter-)actions, environment, or even weather conditions. The relevant information is assumed to be contained in the two-dimensional signal provided by an image (width and height in pixels) or the three-dimensional signal provided by a video (width, height, and time). Intermediate-level information such as object classes [196], locations [197], or motion [198] can also help applications to fulfill certain tasks such as intelligent compression [199], video summarization [200], or video retrieval [201]. Usually, videos, with their temporal dimension, are a richer source of data than single images [202], and thus certain content, such as object motion or object behavior, can be extracted only from videos. Often, machine learning, and nowadays deep learning, techniques are utilized to model prior knowledge about object or scene appearance using labeled training samples [203, 204]. After a learning phase, these models are then applied in real-world applications, which is called inference.
|
Michael Teutsch, Angel Sappa, & Riad I. Hammoud. (2022). Image and Video Enhancement. In Computer Vision in the Infrared Spectrum. Synthesis Lectures on Computer Vision (pp. 9–21). SLCV. Springer.
Abstract: Image and video enhancement aims at improving signal quality with respect to imaging artifacts such as noise and blur, or atmospheric perturbations such as turbulence and haze. It is usually performed to assist humans in analyzing image and video content, or simply to present visually appealing images and videos. However, image and video enhancement can also be used as a preprocessing technique to ease the task, and thus improve the performance, of subsequent automatic image content analysis algorithms: preceding dehazing can improve object detection, as shown by [23], and explicit turbulence modeling can improve moving object detection, as discussed by [24]. It remains an open question, however, whether image and video enhancement should be performed explicitly as a preprocessing step or implicitly, for example by feeding affected images directly to a neural network for image content analysis such as object detection [25]. Especially for real-time video processing at low latency, it can be better to handle image perturbations implicitly in order to minimize the processing time of an algorithm. This can be achieved by making algorithms for image content analysis robust, or even invariant, to perturbations such as noise or blur. Additionally, mistakes of an individual preprocessing module can obviously affect the quality of the entire processing pipeline.
|
David Geronimo, David Vazquez, & Arturo de la Escalera. (2017). Vision-Based Advanced Driver Assistance Systems. In Computer Vision in Vehicle Technology: Land, Sea, and Air.
Keywords: ADAS; Autonomous Driving
|
Santiago Segui, Laura Igual, Fernando Vilariño, Petia Radeva, Carolina Malagelada, Fernando Azpiroz, et al. (2008). Diagnostic System for Intestinal Motility Disfunctions Using Video Capsule Endoscopy. In A. Gasteratos, M. Vincze, & J. K. Tsotsos (Eds.), Computer Vision Systems: 6th International Conference (Vol. 5008, pp. 251–260). LNCS. Berlin Heidelberg: Springer-Verlag.
Abstract: Wireless Video Capsule Endoscopy is a clinical technique consisting of the analysis of images from the intestine which are provided by an ingestible device with a camera attached to it. In this paper we propose an automatic system to diagnose severe intestinal motility disfunctions using the video endoscopy data. The system is based on the application of computer vision techniques within a machine learning framework in order to obtain the characterization of diverse motility events from video sequences. We present experimental results that demonstrate the effectiveness of the proposed system and compare them with the ground-truth provided by the gastroenterologists.
|
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2008). Sub-Class Error-Correcting Output Codes. In Computer Vision Systems: 6th International Conference (Vol. 5008, pp. 494–504). LNCS.
|
Xavier Baro, & Jordi Vitria. (2008). Weighted Dissociated Dipoles: An Extended Visual Feature Set. In Computer Vision Systems: 6th International Conference, ICVS (Vol. 5008, pp. 281–290). LNCS.
|
Debora Gil, F. Javier Sanchez, Gloria Fernandez Esparrach, & Jorge Bernal. (2015). 3D Stable Spatio-temporal Polyp Localization in Colonoscopy Videos. In Computer-Assisted and Robotic Endoscopy: Revised Selected Papers of the Second International Workshop, CARE 2015, Held in Conjunction with MICCAI 2015 (Vol. 9515, pp. 140–152). LNCS.
Abstract: Computational intelligent systems could reduce the polyp miss rate in colonoscopy for colon cancer diagnosis and, thus, increase the efficiency of the procedure. One of the main problems of existing polyp localization methods is a lack of spatio-temporal stability in their response. We propose to explore the response of a given polyp localization method across temporal windows in order to select those image regions presenting the highest stable spatio-temporal response. Spatio-temporal stability is achieved by extracting 3D watershed regions on the temporal window. Stability in localization response is statistically determined by analysis of the variance of the output of the localization method inside each 3D region. We have explored the benefits of considering spatio-temporal stability in two different tasks: polyp localization and polyp detection. Experimental results indicate an average improvement of 21.5% in polyp localization and 43.78% in polyp detection.
Keywords: Colonoscopy, Polyp Detection, Polyp Localization, Region Extraction, Watersheds
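The variance-based stability test described in the abstract can be sketched as follows, assuming a precomputed per-pixel localization response over a temporal window and a 3D watershed labelling; the function name and the threshold are illustrative, not taken from the paper:

```python
import numpy as np

def stable_regions(response, labels, var_thresh=0.01):
    """Rank 3D watershed regions by mean localization response, keeping
    only those whose response variance over the temporal window is low
    (i.e. spatio-temporally stable).

    response : 3D array (rows, cols, frames) of localization scores
    labels   : 3D array of watershed region ids (0 = background/ridges)
    """
    stable = []
    for region_id in np.unique(labels):
        if region_id == 0:
            continue
        vals = response[labels == region_id]
        if vals.var() < var_thresh:
            stable.append((int(region_id), float(vals.mean())))
    # strongest stable responses first
    return sorted(stable, key=lambda kv: -kv[1])
```

A real pipeline would compute `labels` with a 3D watershed over the stacked frames (e.g. `skimage.segmentation.watershed`) and pass each surviving region on to the localization or detection stage.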
|