|
Francesco Ciompi, Oriol Pujol, Carlo Gatta, Marina Alberti, Simone Balocco, Xavier Carrillo, et al. (2012). HoliMab: A Holistic Approach for Media-Adventitia Border Detection in Intravascular Ultrasound. MIA - Medical Image Analysis, 16(6), 1085–1100.
Abstract: We present a fully automatic methodology for the detection of the Media-Adventitia border (MAb) in human coronary arteries in Intravascular Ultrasound (IVUS) images. Robust border detection is achieved by means of a holistic interpretation of the detection problem, where the target object, i.e. the media layer, is considered as part of the whole vessel in the image and all the relationships between tissues are learnt. A fairly general framework exploiting multi-class tissue characterization as well as contextual information on the morphology and appearance of the tissues is presented. The methodology is (i) validated through an exhaustive comparison with inter-observer variability on two challenging databases and (ii) compared with state-of-the-art methods for the detection of the MAb in IVUS. The obtained average values for the mean radial distance and the percentage of area difference are 0.211 mm and 10.1%, respectively. The applicability of the proposed methodology to clinical practice is also discussed.
Keywords: Media–Adventitia border detection; Intravascular ultrasound; Multi-Scale Stacked Sequential Learning; Error-correcting output codes; Holistic segmentation
|
|
|
Santiago Segui, Michal Drozdzal, Fernando Vilariño, Carolina Malagelada, Fernando Azpiroz, Petia Radeva, et al. (2012). Categorization and Segmentation of Intestinal Content Frames for Wireless Capsule Endoscopy. TITB - IEEE Transactions on Information Technology in Biomedicine, 16(6), 1341–1352.
Abstract: Wireless capsule endoscopy (WCE) is a device that allows the direct visualization of the gastrointestinal tract with minimal discomfort for the patient, but at the price of a large amount of screening time. In order to reduce this time, several works have proposed to automatically remove all the frames showing intestinal content. These methods label frames as {intestinal content – clear} without discriminating between types of content (which have different physiological meanings) or the portion of the image covered. In addition, since the presence of intestinal content has been identified as an indicator of intestinal motility, its accurate quantification is of potential clinical relevance. In this paper, we present a method for the robust detection and segmentation of intestinal content in WCE images, together with its further discrimination between turbid liquid and bubbles. Our proposal is based on a twofold system. First, frames presenting intestinal content are detected by a support vector machine classifier using color and textural information. Second, intestinal content frames are segmented into {turbid, bubbles, and clear} regions. We show a detailed validation using a large dataset. Our system outperforms previous methods and, for the first time, discriminates turbid liquid from bubbles.
|
|
|
Antonio Hernandez, Carlo Gatta, Sergio Escalera, Laura Igual, Victoria Martin-Yuste, Manel Sabate, et al. (2012). Accurate coronary centerline extraction, caliber estimation and catheter detection in angiographies. TITB - IEEE Transactions on Information Technology in Biomedicine, 16(6), 1332–1340.
Abstract: Segmentation of coronary arteries in X-ray angiography is a fundamental tool to evaluate arterial diseases and choose the proper coronary treatment. Accurate segmentation of coronary arteries has also become an important topic for the registration of different modalities, which allows physicians rapid access to different medical imaging information from Computed Tomography (CT) scans or Magnetic Resonance Imaging (MRI). In this paper, we propose an accurate, fully automatic algorithm based on Graph-cuts for vessel centerline extraction, caliber estimation, and catheter detection. Vesselness, geodesic paths, and a new multi-scale edgeness map are combined to customize the Graph-cuts approach to the segmentation of tubular structures, by means of a global optimization of the Graph-cuts energy function. Moreover, a novel supervised learning methodology that integrates local and contextual information is proposed for automatic catheter detection. We evaluate the method's performance on three datasets coming from different imaging systems. The method performs as well as the expert observer with respect to centerline detection and caliber estimation. Moreover, it discriminates between arteries and catheters with an accuracy of 96.5%, a sensitivity of 72%, and a precision of 97.4%.
|
|
|
T. Mouats, N. Aouf, Angel Sappa, Cristhian A. Aguilera-Carrasco, & Ricardo Toledo. (2015). Multi-Spectral Stereo Odometry. TITS - IEEE Transactions on Intelligent Transportation Systems, 16(3), 1210–1224.
Abstract: In this paper, we investigate the problem of visual odometry for ground vehicles based on the simultaneous utilization of multispectral cameras. The setup encompasses a stereo rig composed of an optical (visible) sensor and a thermal sensor. The novelty resides in the localization of the cameras as a stereo setup rather than as two monocular cameras of different spectra. To the best of our knowledge, this is the first time such a task has been attempted. Log-Gabor wavelets at different orientations and scales are used to extract interest points from both images. These are then described using a combination of frequency and spatial information within the local neighborhood. Matches between the pairs of multimodal images are computed using the cosine similarity function applied to the descriptors. A pyramidal Lucas–Kanade tracker is also introduced to tackle temporal feature matching within challenging sequences of the data sets. The vehicle egomotion is computed from the triangulated 3-D points corresponding to the matched features. A windowed version of bundle adjustment incorporating Gauss–Newton optimization is utilized for motion estimation. An outlier removal scheme is also included within the framework. Multispectral data sets corresponding to real outdoor scenarios captured with our multimodal setup were generated and used as a test bed. Finally, detailed results validating the proposed strategy are illustrated.
Keywords: Egomotion estimation; feature matching; multispectral odometry (MO); optical flow; stereo odometry; thermal imagery
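The descriptor-matching step in the abstract above can be sketched in a few lines. This is a minimal illustration only: the actual Log-Gabor descriptors, matching strategy, and the 0.8 threshold below are assumptions, not taken from the paper.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na > 0 and nb > 0 else 0.0

def match_descriptors(desc_visible, desc_thermal, threshold=0.8):
    """Greedy cross-spectral matching: pair each visible-image descriptor
    with the thermal-image descriptor of highest cosine similarity, and
    keep only matches above the (illustrative) similarity threshold."""
    matches = []
    for i, da in enumerate(desc_visible):
        best_j, best_s = -1, -1.0
        for j, db in enumerate(desc_thermal):
            s = cosine_similarity(da, db)
            if s > best_s:
                best_j, best_s = j, s
        if best_s >= threshold:
            matches.append((i, best_j, best_s))
    return matches
```

Cosine similarity is a natural choice across modalities because it compares descriptor orientation rather than raw magnitude, which differs strongly between visible and thermal responses.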
|
|
|
Alejandro Gonzalez Alzate, Zhijie Fang, Yainuvis Socarras, Joan Serrat, David Vazquez, Jiaolong Xu, et al. (2016). Pedestrian Detection at Day/Night Time with Visible and FIR Cameras: A Comparison. SENS - Sensors, 16(6), 820.
Abstract: Despite all the significant advances in pedestrian detection brought by computer vision for driving assistance, it is still a challenging problem. One reason is the extremely varying lighting conditions under which such a detector should operate, namely day and night time. Recent research has shown that the combination of visible and non-visible imaging modalities may increase detection accuracy, where the infrared spectrum plays a critical role. The goal of this paper is to assess the accuracy gain of different pedestrian models (holistic, part-based, patch-based) when training with images in the far infrared spectrum. Specifically, we want to compare detection accuracy on test images recorded at day and night time when trained (and tested) using (a) plain color images, (b) just infrared images, and (c) both of them. To obtain results for the last case, we propose an early fusion approach to combine features from both modalities. We base the evaluation on a new dataset built for this purpose, as well as on the publicly available KAIST multispectral dataset.
Keywords: Pedestrian Detection; FIR
|
|
|
Angel Sappa, P. Carvajal, Cristhian A. Aguilera-Carrasco, Miguel Oliveira, Dennis Romero, & Boris X. Vintimilla. (2016). Wavelet based visible and infrared image fusion: a comparative study. SENS - Sensors, 16(6), 1–15.
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from the combination of different setups in the wavelet image decomposition stage with different fusion strategies in the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).
Keywords: Image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform
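One of the setups of the kind compared above can be illustrated on 1-D signals: a single-level Haar decomposition, averaging of approximation (low-pass) coefficients, and maximum-absolute selection of detail (high-pass) coefficients. This is a toy sketch under those assumptions, not the paper's 2-D pipeline.

```python
import math

S = math.sqrt(2.0)

def haar_dwt(signal):
    """Single-level 1-D Haar transform (even-length input)."""
    approx = [(signal[i] + signal[i + 1]) / S for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / S for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt."""
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / S)
        out.append((a - d) / S)
    return out

def fuse(sig_visible, sig_infrared):
    """Fuse two registered signals in the wavelet domain: average the
    approximation coefficients, keep the larger-magnitude detail
    coefficient, then reconstruct."""
    a1, d1 = haar_dwt(sig_visible)
    a2, d2 = haar_dwt(sig_infrared)
    a = [(x + y) / 2.0 for x, y in zip(a1, a2)]
    d = [x if abs(x) >= abs(y) else y for x, y in zip(d1, d2)]
    return haar_idwt(a, d)
```

The max-abs rule on details tends to carry over the sharpest edges from either modality, while averaging the approximations blends the overall intensity levels; the paper's point is precisely that such choices should be ranked by quantitative metrics rather than fixed a priori.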
|
|
|
C. Alejandro Parraga, & Arash Akbarinia. (2016). Colour Constancy as a Product of Dynamic Centre-Surround Adaptation. In 16th Annual meeting in Vision Sciences Society (Vol. 16).
Abstract: Colour constancy refers to the human visual system's ability to preserve the perceived colour of objects despite changes in the illumination. Its exact mechanisms are unknown, although a number of systems ranging from retinal to cortical and memory are thought to play important roles. The strength of the perceptual shift necessary to preserve these colours is usually estimated by the vectorial distances from an ideal match (or canonical illuminant). In this work we explore how much of the colour constancy phenomenon could be explained by well-known physiological properties of V1 and V2 neurons whose receptive fields (RF) vary according to the contrast and orientation of surround stimuli. Indeed, it has been shown that both RF size and the normalization occurring between centre and surround in cortical neurons depend on the local properties of surrounding stimuli. Our starting point is a computational model that includes this dynamical centre-surround adaptation by means of two overlapping asymmetric Gaussian kernels: their variances are adjusted to the contrast of surrounding pixels to represent the changes in RF size of cortical neurons, and the weights of their respective contributions are altered according to differences in centre-surround contrast and orientation. The final output of the model is obtained by convolving an image with this dynamical operator, and an estimation of the illuminant is obtained by considering the contrast of the far surround. We tested our algorithm on naturalistic stimuli from several benchmark datasets. Our results show that although our model does not require any training, its performance against the state-of-the-art is highly competitive, even outperforming learning-based algorithms in some cases. These results are very encouraging considering that they were obtained with the same parameters for all datasets (i.e. just as the human visual system operates).
|
|
|
Wenjuan Gong, Xuena Zhang, Jordi Gonzalez, Andrews Sobral, Thierry Bouwmans, Changhe Tu, et al. (2016). Human Pose Estimation from Monocular Images: A Comprehensive Survey. SENS - Sensors, 16(12), 1966.
Abstract: Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used.
Keywords: human pose estimation; human body models; generative methods; discriminative methods; top-down methods; bottom-up methods
|
|
|
Meysam Madadi, Sergio Escalera, Xavier Baro, & Jordi Gonzalez. (2022). End-to-end Global to Local CNN Learning for Hand Pose Recovery in Depth data. IETCV - IET Computer Vision, 16(1), 50–66.
Abstract: Despite recent advances in 3D pose estimation of human hands, especially thanks to the advent of CNNs and depth cameras, this task is still far from being solved. This is mainly due to the highly non-linear dynamics of fingers, which make hand model training a challenging task. In this paper, we exploit a novel hierarchical tree-like structured CNN, in which branches are trained to become specialized in predefined subsets of hand joints, called local poses. We further fuse local pose features, extracted from hierarchical CNN branches, to learn higher-order dependencies among joints in the final pose by end-to-end training. Moreover, the loss function is defined to incorporate appearance and physical constraints on feasible hand motion and deformation. Finally, we introduce a non-rigid data augmentation approach to increase the amount of training depth data. Experimental results suggest that feeding a tree-shaped CNN, specialized in local poses, into a fusion network for modeling joint correlations and dependencies helps to increase the precision of final estimations, outperforming state-of-the-art results on the NYU and SyntheticHand datasets.
Keywords: Computer vision; data acquisition; human computer interaction; learning (artificial intelligence); pose estimation
|
|
|
Marco Pedersoli, Jordi Gonzalez, Xu Hu, & Xavier Roca. (2014). Toward Real-Time Pedestrian Detection Based on a Deformable Template Model. TITS - IEEE Transactions on Intelligent Transportation Systems, 15(1), 355–364.
Abstract: Most advanced driving assistance systems already include pedestrian detection. Unfortunately, there is still a tradeoff between precision and real-time performance. For reliable detection, excellent precision-recall is needed: as many pedestrians as possible must be detected while, at the same time, avoiding too many false alarms; in addition, very fast computation is needed for fast reactions to dangerous situations. Recently, novel approaches based on deformable templates have been proposed, since these show reasonable detection performance, although they are computationally too expensive for real-time operation. In this paper, we present a system for pedestrian detection based on a hierarchical multiresolution part-based model. The proposed system is able to achieve state-of-the-art detection accuracy due to the local deformations of the parts, while exhibiting a speedup of more than one order of magnitude due to a fast coarse-to-fine inference technique. Moreover, our system explicitly infers the level of resolution available so that the detection of small examples is feasible at very reduced computational cost. We conclude this contribution by presenting how a graphics-processing-unit-optimized implementation of our proposed system is suitable for real-time pedestrian detection in terms of both accuracy and speed.
|
|
|
Naveen Onkarappa, & Angel Sappa. (2014). Speed and Texture: An Empirical Study on Optical-Flow Accuracy in ADAS Scenarios. TITS - IEEE Transactions on Intelligent Transportation Systems, 15(1), 136–147.
Abstract: Increasing mobility in everyday life has raised concerns about automotive safety and human life. Computer vision has become a valuable tool for developing driver assistance applications that target such concerns. Many such vision-based assisting systems rely on motion estimation, where optical flow has shown its potential. A variational formulation of optical flow that achieves a dense flow field involves a data term and regularization terms. Depending on the image sequence, the regularization has to be appropriately weighted for better accuracy of the flow field. Because a vehicle can be driven in different kinds of environments, on different roads, and at different speeds, optical-flow estimation has to be computed accurately in all such scenarios. In this paper, we first present the polar representation of optical flow, which is well suited to driving scenarios because it allows regularization factors to be updated independently in the different directional components. Then, we study the influence of vehicle speed and scene texture on optical-flow accuracy. Furthermore, we analyze the relationships of these specific characteristics of a driving scenario (vehicle speed and road texture) with the regularization weights in optical flow for better accuracy. As required by the work in this paper, we have generated several synthetic sequences along with ground-truth flow fields.
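The polar representation mentioned in the abstract amounts to re-expressing each flow vector (u, v) as a magnitude and an orientation, so that the two components can be regularized with independent weights. A minimal conversion sketch (function names are illustrative, not from the paper):

```python
import math

def flow_to_polar(u, v):
    """Convert a Cartesian flow vector (u, v) into polar form:
    magnitude and orientation in radians, orientation in [0, 2*pi)."""
    magnitude = math.hypot(u, v)
    orientation = math.atan2(v, u) % (2.0 * math.pi)
    return magnitude, orientation

def polar_to_flow(magnitude, orientation):
    """Inverse conversion back to Cartesian flow components."""
    return magnitude * math.cos(orientation), magnitude * math.sin(orientation)
```

In a driving scene, the magnitude channel is dominated by vehicle speed while the orientation channel encodes the expanding radial pattern around the focus of expansion, which is why weighting them separately can help.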
|
|
|
Ferran Diego, Joan Serrat, & Antonio Lopez. (2013). Joint spatio-temporal alignment of sequences. TMM - IEEE Transactions on Multimedia, 15(6), 1377–1387.
Abstract: Video alignment is important in different areas of computer vision, such as wide-baseline matching, action recognition, change detection, video copy detection, and frame dropping prevention. Current video alignment methods usually deal with the relatively simple case of fixed or rigidly attached cameras, or simultaneous acquisition. Therefore, in this paper we propose a joint method for bringing two video sequences into spatio-temporal alignment. Specifically, the novelty of the paper is to fold the spatial and temporal alignment into a single framework that simultaneously satisfies frame-correspondence and frame-alignment similarity, exploiting knowledge among neighboring frames via a standard pairwise Markov random field (MRF). This new formulation is able to handle the alignment of sequences recorded at different times by independently moving cameras that follow similar trajectories, and it also generalizes the particular cases of a fixed geometric transformation and/or a linear temporal mapping. We conduct experiments on different scenarios, such as sequences recorded simultaneously or by moving cameras, to validate the robustness of the proposed approach. The proposed method provides the highest video alignment accuracy compared to state-of-the-art methods on sequences recorded from vehicles driving along the same track at different times.
Keywords: video alignment
|
|
|
Alicia Fornes, Anjan Dutta, Albert Gordo, & Josep Llados. (2012). CVC-MUSCIMA: A Ground-Truth of Handwritten Music Score Images for Writer Identification and Staff Removal. IJDAR - International Journal on Document Analysis and Recognition, 15(3), 243–251.
Abstract: The analysis of music scores has been an active research field in recent decades. However, there are no publicly available databases of handwritten music scores for the research community. In this paper, we present the CVC-MUSCIMA database and ground truth of handwritten music score images. The dataset consists of 1,000 music sheets written by 50 different musicians. It has been especially designed for writer identification and staff removal tasks. In addition to the description of the dataset, ground truth, partitioning, and evaluation metrics, we also provide some baseline results to ease the comparison between different approaches.
Keywords: Music scores; Handwritten documents; Writer identification; Staff removal; Performance evaluation; Graphics recognition; Ground truths
|
|
|
Partha Pratim Roy, Umapada Pal, & Josep Llados. (2012). Text line extraction in graphical documents using background and foreground. IJDAR - International Journal on Document Analysis and Recognition, 15(3), 227–241.
Abstract: In graphical documents (e.g., maps, engineering drawings) and artistic documents, text lines are annotated in multiple orientations or in a curvilinear way to illustrate different locations or symbols. For the optical character recognition of such documents, individual text lines need to be extracted. In this paper, we propose a novel method to segment such text lines based on the foreground and background information of the text components. To effectively utilize the background information, a water reservoir concept is used. In the proposed scheme, individual components are first detected and grouped into character clusters in a hierarchical way using size and positional information. Next, the clusters are extended at both ends to determine potential candidate regions. Finally, with the help of these candidate regions, individual lines are extracted. Experimental results are presented on different datasets of graphical documents, camera-based warped documents, noisy images containing seals, etc. The results demonstrate that our approach is robust and invariant to the size and orientation of the text lines present in the document.
|
|
|
Jiaolong Xu, David Vazquez, Antonio Lopez, Javier Marin, & Daniel Ponsa. (2014). Learning a Part-based Pedestrian Detector in Virtual World. TITS - IEEE Transactions on Intelligent Transportation Systems, 15(5), 2121–2131.
Abstract: Detecting pedestrians with on-board vision systems is of paramount interest for assisting drivers to prevent vehicle-to-pedestrian accidents. The core of a pedestrian detector is its classification module, which aims at deciding whether a given image window contains a pedestrian. Given the difficulty of this task, many classifiers have been proposed during the last fifteen years. Among them, the so-called (deformable) part-based classifiers, including multi-view modeling, are usually top ranked in accuracy. Training such classifiers is not trivial, since proper aspect clustering and spatial part alignment of the pedestrian training samples are crucial for obtaining an accurate classifier. In this paper, first we perform automatic aspect clustering and part alignment by using virtual-world pedestrians, i.e., human annotations are not required. Second, we use a mixture-of-parts approach that allows part sharing among different aspects. Third, these proposals are integrated in a learning framework that also allows incorporating real-world training data to perform domain adaptation between virtual- and real-world cameras. Overall, the obtained results on four popular on-board datasets show that our proposal clearly outperforms the state-of-the-art deformable part-based detector known as latent SVM.
Keywords: Domain Adaptation; Pedestrian Detection; Virtual Worlds
|
|