|
Frederic Sampedro, Sergio Escalera, & Anna Puig. (2014). Iterative Multiclass Multiscale Stacked Sequential Learning: definition and application to medical volume segmentation. PRL - Pattern Recognition Letters, 46, 1–10.
Abstract: In this work we present the iterative multi-class multi-scale stacked sequential learning framework (IMMSSL), a novel learning scheme that is particularly suited for medical volume segmentation applications. This model exploits the inherent voxel contextual information of the structures of interest in order to improve its segmentation performance. Without any prior assumption on the feature set or learning algorithm, the proposed scheme directly seeks to learn the contextual properties of a region from the predicted classifications of previous classifiers within an iterative scheme. Performance results regarding segmentation accuracy on three two-class and multi-class medical volume datasets show a significant improvement with respect to state-of-the-art alternatives. Due to its ease of implementation and its independence of feature space and learning algorithm, the presented machine learning framework could be considered a first choice in complex volume segmentation scenarios.
Keywords: Machine learning; Sequential learning; Multi-class problems; Contextual learning; Medical volume segmentation
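Illustrative note: the stacking idea above can be pictured with a minimal, non-iterative multi-scale stacking pass for volume segmentation. The sketch below assumes per-voxel feature vectors and labels; the classifier choice, function names and scales are illustrative and not taken from the paper.

import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def mssl_train(features, labels, volume_shape, scales=(1, 2, 4)):
    """features: (n_voxels, n_feats); labels: (n_voxels,); volume_shape: 3D shape."""
    base_clf = RandomForestClassifier(n_estimators=50).fit(features, labels)
    proba = base_clf.predict_proba(features)          # first-stage label predictions
    context = []
    for c in range(proba.shape[1]):
        vol = proba[:, c].reshape(volume_shape)
        # Multi-scale decomposition of the predicted label map: smoothed versions
        # of each class probability map become extra "contextual" features.
        context += [gaussian_filter(vol, sigma=s).ravel() for s in scales]
    extended = np.hstack([features, np.stack(context, axis=1)])
    stacked_clf = RandomForestClassifier(n_estimators=50).fit(extended, labels)
    return base_clf, stacked_clf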
|
|
|
Frederic Sampedro, & Sergio Escalera. (2015). Spatial codification of label predictions in Multi-scale Stacked Sequential Learning: A case study on multi-class medical volume segmentation. IETCV - IET Computer Vision, 9(3), 439–446.
Abstract: In this study, the authors propose the spatial codification of label predictions within the multi-scale stacked sequential learning (MSSL) framework, a successful learning scheme for dealing with non-independent identically distributed data entries. After providing a motivation for this objective, they describe its theoretical framework based on the introduction of the blurred shape model as a smart descriptor to codify the spatial distribution of the predicted labels, and define the new extended feature set for the second stacked classifier. They then particularise this scheme to volume segmentation applications. Finally, they test the implementation of the proposed framework on two medical volume segmentation datasets, obtaining significant performance improvements (with 95% confidence) in comparison to a standard AdaBoost classifier and classical MSSL approaches.
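Illustrative note: the spatial codification can be pictured with a toy blurred-shape-model-style descriptor of a 2D binary prediction map. In this simplified sketch every foreground pixel votes to all grid-cell centres with an inverse-distance weight (the actual BSM restricts votes to neighbouring cells), and the grid size is an illustrative parameter.

import numpy as np

def bsm_descriptor(pred_mask, grid=8):
    """pred_mask: 2D boolean map of one predicted label; returns a grid*grid descriptor."""
    h, w = pred_mask.shape
    centres_y = (np.arange(grid) + 0.5) * h / grid
    centres_x = (np.arange(grid) + 0.5) * w / grid
    desc = np.zeros((grid, grid))
    ys, xs = np.nonzero(pred_mask)
    for y, x in zip(ys, xs):
        d = np.hypot(centres_y[:, None] - y, centres_x[None, :] - x)
        vote = 1.0 / (1.0 + d)          # closer cells receive a larger share of the vote
        desc += vote / vote.sum()
    return (desc / max(len(ys), 1)).ravel()   # normalised spatial distribution of the label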
|
|
|
Daniel Sanchez, Miguel Angel Bautista, & Sergio Escalera. (2015). HuPBA 8k+: Dataset and ECOC-GraphCut based Segmentation of Human Limbs. NEUCOM - Neurocomputing, 150(A), 173–188.
Abstract: Human multi-limb segmentation in RGB images has attracted a lot of interest in the research community because of the huge number of possible applications in fields like Human-Computer Interaction, Surveillance, eHealth, or Gaming. Nevertheless, human multi-limb segmentation is a very hard task because of the changes in appearance produced by different points of view, clothing, lighting conditions, occlusions, and the number of articulations of the human body. Furthermore, this huge pose variability makes the availability of large annotated datasets difficult. In this paper, we introduce the HuPBA8k+ dataset. The dataset contains more than 8,000 labeled frames at pixel precision, including more than 120,000 manually labeled samples of 14 different limbs. For completeness, the dataset is also labeled at frame level with action annotations drawn from an 11-action dictionary which includes both single-person actions and person-person interactive actions. Furthermore, we also propose a two-stage approach for the segmentation of human limbs. In a first stage, cascades of classifiers are trained to split human limbs in a tree-structured way, and are included in an Error-Correcting Output Codes (ECOC) framework to define a body-like probability map. This map is used to obtain a binary mask of the subject by means of GMM color modelling and GraphCuts theory. In a second stage, we embed a similar tree structure in an ECOC framework to build a more accurate set of limb-like probability maps within the segmented user mask, which are fed to a multi-label GraphCut procedure to obtain the final multi-limb segmentation. The methodology is tested on the novel HuPBA8k+ dataset, showing performance improvements in comparison to state-of-the-art approaches. In addition, a baseline of standard action recognition methods for the 11 action categories of the novel dataset is also provided.
Keywords: Human limb segmentation; ECOC; Graph-Cuts
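Illustrative note: a toy decoding step for the ECOC stage described above might look as follows. It assumes a ternary coding matrix and already-trained binary classifiers exposing a decision_function (as sklearn SVMs do); both are illustrative assumptions rather than the paper's exact setup.

import numpy as np

def ecoc_decode(x, dichotomy_classifiers, coding_matrix):
    """x: (d,) feature vector; coding_matrix: (n_classes, n_dichotomies) with entries in {-1, 0, +1}."""
    outputs = np.array([clf.decision_function(x[None])[0] for clf in dichotomy_classifiers])
    # Hamming-style decoding: count disagreements with each class codeword,
    # ignoring positions where the codeword is 0 (class not involved in the dichotomy).
    mask = coding_matrix != 0
    dist = (mask * (1 - np.sign(outputs)[None, :] * coding_matrix) / 2).sum(axis=1)
    return int(np.argmin(dist))       # index of the predicted limb class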
|
|
|
Eloi Puertas, Miguel Angel Bautista, Daniel Sanchez, Sergio Escalera, & Oriol Pujol. (2014). Learning to Segment Humans by Stacking their Body Parts. In ECCV Workshop on ChaLearn Looking at People (Vol. 8925, pp. 685–697). LNCS.
Abstract: Human segmentation in still images is a complex task due to the wide range of body poses and drastic changes in environmental conditions. Usually, human body segmentation is treated in a two-stage fashion. First, a human body part detection step is performed, and then human part detections are used as prior knowledge to be optimized by segmentation strategies. In this paper, we present a two-stage scheme based on Multi-Scale Stacked Sequential Learning (MSSL). We define an extended feature set by stacking a multi-scale decomposition of body part likelihood maps. These likelihood maps are obtained in a first stage by means of an ECOC ensemble of soft body part detectors. In a second stage, contextual relations of part predictions are learnt by a binary classifier, obtaining an accurate body confidence map. The obtained confidence map is fed to a graph cut optimization procedure to obtain the final segmentation. Results show improved segmentation when MSSL is included in the human segmentation pipeline.
Keywords: Human body segmentation; Stacked Sequential Learning
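Illustrative note: the final graph-cut step can be sketched with the PyMaxflow library, assuming a per-pixel body confidence map in [0, 1] produced by the second-stage classifier; the smoothness weight and the simple Potts-like pairwise term are illustrative choices, not the paper's exact energy.

import numpy as np
import maxflow   # PyMaxflow

def cut_from_confidence(conf, lam=2.0, eps=1e-6):
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(conf.shape)
    g.add_grid_edges(nodes, lam)                 # constant pairwise (smoothness) cost
    # Unary terms: negative log-likelihoods derived from the confidence map.
    g.add_grid_tedges(nodes, -np.log(1 - conf + eps), -np.log(conf + eps))
    g.maxflow()
    # Boolean mask splitting pixels into the two terminal segments; which side is
    # the person depends on the source/sink convention adopted above.
    return g.get_grid_segments(nodes)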
|
|
|
Miguel Oliveira, Angel Sappa, & Victor Santos. (2015). A probabilistic approach for color correction in image mosaicking applications. TIP - IEEE Transactions on Image Processing, 24(2), 508–523.
Abstract: Image mosaicking applications require both geometrical and photometrical registrations between the images that compose the mosaic. This paper proposes a probabilistic color correction algorithm for correcting the photometrical disparities. First, the image to be color corrected is segmented into several regions using mean shift. Then, connected regions are extracted using a region fusion algorithm. Local joint image histograms of each region are modeled as collections of truncated Gaussians using a maximum likelihood estimation procedure. Then, local color palette mapping functions are computed using these sets of Gaussians. The color correction is performed by applying those functions to all the regions of the image. An extensive comparison with ten other state-of-the-art color correction algorithms is presented, using two different image pair data sets. Results show that the proposed approach obtains the best average scores in both data sets and evaluation metrics and is also the most robust to failures.
Keywords: Color correction; image mosaicking; color transfer; color palette mapping functions
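Illustrative note: a much-simplified illustration of the palette-mapping idea, assuming the two images are already geometrically registered and that src_overlap/ref_overlap are float arrays of the shared region. A per-channel linear map fitted by least squares stands in for the paper's mean-shift segmentation and truncated-Gaussian modelling.

import numpy as np

def fit_channel_maps(src_overlap, ref_overlap):
    """Least-squares gain/offset per colour channel so that ref ~ a * src + b."""
    maps = []
    for c in range(src_overlap.shape[-1]):
        x, y = src_overlap[..., c].ravel(), ref_overlap[..., c].ravel()
        A = np.stack([x, np.ones_like(x)], axis=1)
        (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
        maps.append((a, b))
    return maps

def apply_channel_maps(img, maps):
    out = np.empty_like(img, dtype=float)
    for c, (a, b) in enumerate(maps):
        out[..., c] = np.clip(a * img[..., c] + b, 0, 255)   # corrected channel
    return out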
|
|
|
G. Zahnd, Simone Balocco, A. Serusclat, P. Moulin, M. Orkisz, & D. Vray. (2015). Progressive attenuation of the longitudinal kinetics in the common carotid artery: preliminary in vivo assessment. UMB - Ultrasound in Medicine and Biology, 41(1), 339–345.
Abstract: Longitudinal kinetics (LOKI) of the arterial wall consists of the shearing motion of the intima-media complex over the adventitia layer in the direction parallel to the blood flow during the cardiac cycle. The aim of this study was to investigate the local variability of LOKI amplitude along the length of the vessel. By use of a previously validated motion-estimation framework, 35 in vivo longitudinal B-mode ultrasound cine loops of healthy common carotid arteries were analyzed. Results indicated that LOKI amplitude is progressively attenuated along the length of the artery, as it is larger in regions located on the proximal side of the image (i.e., toward the heart) and smaller in regions located on the distal side of the image (i.e., toward the head), with an average attenuation coefficient of -2.5 ± 2.0%/mm. Reported for the first time in this study, this phenomenon is likely to be of great importance in improving understanding of atherosclerosis mechanisms, and has the potential to be a novel index of arterial stiffness.
Keywords: Arterial stiffness; Atherosclerosis; Common carotid artery; Longitudinal kinetics; Motion tracking; Ultrasound imaging
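Illustrative note: the reported attenuation coefficient (in %/mm) can be pictured as the slope of a linear fit of LOKI amplitude against longitudinal position, normalised by the mean amplitude. The array names below are hypothetical, and positions are assumed to increase toward the distal (head) side.

import numpy as np

def attenuation_percent_per_mm(positions_mm, amplitudes_mm):
    slope, _ = np.polyfit(positions_mm, amplitudes_mm, deg=1)   # change in amplitude per mm of vessel
    return 100.0 * slope / np.mean(amplitudes_mm)               # expressed as %/mm of the mean amplitude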
|
|
|
G. Lisanti, I. Masi, Andrew Bagdanov, & Alberto del Bimbo. (2015). Person Re-identification by Iterative Re-weighted Sparse Ranking. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(8), 1629–1642.
Abstract: In this paper we introduce a method for person re-identification based on discriminative, sparse basis expansions of targets in terms of a labeled gallery of known individuals. We propose an iterative extension to sparse discriminative classifiers capable of ranking many candidate targets. The approach makes use of soft- and hard-re-weighting to redistribute energy among the most relevant contributing elements and to ensure that the best candidates are ranked at each iteration. Our approach also leverages a novel visual descriptor which we show to be discriminative while remaining robust to pose and illumination variations. An extensive comparative evaluation is given demonstrating that our approach achieves state-of-the-art performance on single- and multi-shot person re-identification scenarios on the VIPeR, i-LIDS, ETHZ, and CAVIAR4REID datasets. The combination of our descriptor and iterative sparse basis expansion improves state-of-the-art rank-1 performance by six percentage points on VIPeR and by 20 on CAVIAR4REID compared to other methods with a single gallery image per person. With multiple gallery and probe images per person our approach improves the state-of-the-art at rank-1 by 17 percentage points on i-LIDS and by 72 on CAVIAR4REID. The approach is also quite efficient, capable of single-shot person re-identification over galleries containing hundreds of individuals at about 30 re-identifications per second.
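Illustrative note: the ranking-by-sparse-basis-expansion idea can be sketched as an l1-regularised reconstruction of the probe descriptor from the gallery matrix, ranking identities by the weight mass assigned to their columns. The iterative soft/hard re-weighting of the paper is omitted, and the regularisation strength is illustrative.

import numpy as np
from sklearn.linear_model import Lasso

def rank_gallery(probe, gallery, gallery_ids, alpha=0.01):
    """probe: (d,); gallery: (d, n) descriptors as columns; gallery_ids: (n,) identity labels."""
    w = Lasso(alpha=alpha, positive=True, max_iter=5000).fit(gallery, probe).coef_
    scores = {pid: np.abs(w[gallery_ids == pid]).sum() for pid in np.unique(gallery_ids)}
    return sorted(scores, key=scores.get, reverse=True)   # identities, best match first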
|
|
|
Marc Bolaños, Maite Garolera, & Petia Radeva. (2014). Video Segmentation of Life-Logging Videos. In 8th Conference on Articulated Motion and Deformable Objects (Vol. 8563, pp. 1–9).
|
|
|
Debora Gil, David Roche, Agnes Borras, & Jesus Giraldo. (2015). Terminating Evolutionary Algorithms at their Steady State. COA - Computational Optimization and Applications, 61(2), 489–515.
Abstract: Assessing the reliability of termination conditions for evolutionary algorithms (EAs) is of prime importance. An erroneous or weak stop criterion can negatively affect both the computational effort and the final result. We introduce a statistical framework for assessing whether a termination condition is able to stop an EA at its steady state, so that its results cannot be improved any further. We use a regression model in order to determine the requirements ensuring that a measure derived from the evolving EA population is related to the distance to the optimum in decision variable space. Our framework is analyzed across 24 benchmark test functions and two standard termination criteria, based on the function fitness value in objective function space and on the EA population distribution in decision variable space, for the differential evolution (DE) paradigm. Results validate our framework as a powerful tool for determining the capability of a measure for terminating an EA, and also identify the decision variable space distribution as the best suited for accurately terminating DE in real-world applications.
Keywords: Evolutionary algorithms; Termination condition; Steady state; Differential evolution
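Illustrative note: a toy version of a decision-variable-space stop test declares the steady state once the population spread in parameter space has collapsed below a tolerance. The threshold and the use of the per-dimension standard deviation are illustrative simplifications of the statistical framework described above.

import numpy as np

def reached_steady_state(population, tol=1e-6):
    """population: (n_individuals, n_dims) decision variables of the current generation."""
    spread = np.max(np.std(population, axis=0))   # largest per-dimension standard deviation
    return spread < tol                           # True -> stop the evolutionary loop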
|
|
|
Francisco Blanco, Felipe Lumbreras, Joan Serrat, Roswitha Siener, Silvia Serranti, Giuseppe Bonifazi, et al. (2014). Taking advantage of Hyperspectral Imaging classification of urinary stones against conventional IR Spectroscopy. JBiO - Journal of Biomedical Optics, 19(12), 126004-1–126004-9.
Abstract: The analysis of urinary stones is mandatory for the best management of the disease after the stone passage, in order to prevent further stone episodes. Thus the use of an appropriate methodology for an individualized stone analysis becomes a key factor for giving the patient the most suitable treatment. A recently developed hyperspectral imaging methodology, based on pixel-to-pixel analysis of near-infrared spectral images, is compared to the reference technique in stone analysis, infrared (IR) spectroscopy. The developed classification model yields a >90% correct classification rate when compared to IR and is able to precisely locate stone components within the structure of the stone with a 15 µm resolution. Owing to the minimal sample pretreatment, short analysis time, good performance of the model, and the automation of the measurements, which makes them analyst-independent, this methodology can be considered a candidate for routine analysis in clinical laboratories.
|
|
|
Lluis Pere de las Heras, Oriol Ramos Terrades, Sergi Robles, & Gemma Sanchez. (2015). CVC-FP and SGT: a new database for structural floor plan analysis and its groundtruthing tool. IJDAR - International Journal on Document Analysis and Recognition, 18(1), 15–30.
Abstract: Recent results on structured learning methods have shown the impact of structural information in a wide range of pattern recognition tasks. In the field of document image analysis, there is a long experience of structural methods for the analysis and information extraction of multiple types of documents. Yet, the lack of conveniently annotated, freely accessible databases has hampered progress in some areas such as technical drawing understanding. In this paper, we present a floor plan database, named CVC-FP, that is annotated for architectural objects and their structural relations. To construct this database, we have implemented a groundtruthing tool, the SGT tool, that allows this sort of information to be specified in a natural manner. The tool has been designed for general-purpose groundtruthing: it allows users to define their own object classes and properties, supports multiple labeling options, enables cooperative work, and provides user and version control. Finally, we have collected some of the recent work on floor plan interpretation and present a quantitative benchmark for this database. Both the CVC-FP database and the SGT tool are freely released to the research community to ease comparisons between methods and boost reproducible research.
|
|
|
Miguel Angel Bautista, Antonio Hernandez, Sergio Escalera, Laura Igual, Oriol Pujol, Josep Moya, et al. (2016). A Gesture Recognition System for Detecting Behavioral Patterns of ADHD. TSMCB - IEEE Transactions on Systems, Man, and Cybernetics, Part B, 46(1), 136–147.
Abstract: We present an application of gesture recognition using an extension of Dynamic Time Warping (DTW) to recognize behavioural patterns of Attention Deficit Hyperactivity Disorder (ADHD). We propose an extension of DTW using one-class classifiers in order to be able to encode the variability of a gesture category, and thus perform an alignment between a gesture sample and a gesture class. We model the set of gesture samples of a certain gesture category using either GMMs or an approximation of Convex Hulls. Thus, we add a theoretical contribution to the classical warping path in DTW by including local modeling of intra-class gesture variability. This methodology is applied in a clinical context, detecting a group of ADHD behavioural patterns defined by experts in psychology/psychiatry, to provide support to clinicians in the diagnosis procedure. The proposed methodology is tested on a novel multi-modal dataset (RGB plus Depth) of recordings of children with ADHD exhibiting behavioural patterns. We obtain satisfactory results when compared to standard state-of-the-art approaches in the DTW context.
Keywords: Gesture Recognition; ADHD; Gaussian Mixture Models; Convex Hulls; Dynamic Time Warping; Multi-modal RGB-Depth data
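Illustrative note: a hedged sketch of the alignment idea, in which the reference gesture is represented by one GMM per model position (so intra-class variability is encoded locally) and the DTW local cost of matching a probe frame to a position is its negative log-likelihood under that position's model. The per-position models, normalisation and acceptance threshold are assumptions for illustration only.

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_position_models(aligned_samples, n_components=2):
    """aligned_samples: (n_samples, M, d) time-aligned training gestures."""
    return [GaussianMixture(n_components).fit(aligned_samples[:, j, :])
            for j in range(aligned_samples.shape[1])]

def dtw_to_model(probe, position_gmms):
    """probe: (T, d) feature frames; returns a length-normalised alignment cost."""
    T, M = len(probe), len(position_gmms)
    cost = np.full((T + 1, M + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, M + 1):
            local = -position_gmms[j - 1].score_samples(probe[i - 1:i])[0]
            cost[i, j] = local + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[T, M] / (T + M)

def is_gesture(probe, position_gmms, threshold=10.0):
    return dtw_to_model(probe, position_gmms) < threshold   # detection by thresholding the cost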
|
|
|
Mikhail Mozerov, & Joost Van de Weijer. (2015). Accurate stereo matching by two step global optimization. TIP - IEEE Transactions on Image Processing, 24(3), 1153–1163.
Abstract: In stereo matching, cost filtering methods and energy minimization algorithms are considered two different techniques. Due to their global extent, energy minimization methods obtain good stereo matching results. However, they tend to fail in occluded regions, in which cost filtering approaches obtain better results. In this paper we intend to combine both approaches with the aim of improving overall stereo matching results. We show that a global optimization with a fully connected model can be solved by cost filtering methods. Based on this observation we propose to perform stereo matching as a two-step energy minimization algorithm. We consider two MRF models: a fully connected model defined on the complete set of pixels in an image, and a conventional locally connected model. We solve the energy minimization problem for the fully connected model, after which the marginal function of the solution is used as the unary potential in the locally connected MRF model. Experiments on the Middlebury stereo datasets show that the proposed method achieves state-of-the-art results.
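Illustrative note: the cost-filtering building block used in the first step can be sketched as a filtered cost volume followed by a winner-take-all disparity choice. Rectified grayscale float images are assumed; the fully connected energy model and second MRF step of the paper are not reproduced here.

import numpy as np
from scipy.ndimage import uniform_filter

def filtered_disparity(left, right, max_disp=64, win=9):
    """left, right: rectified grayscale images as float arrays of equal shape."""
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        diff = np.abs(left[:, d:] - right[:, :w - d])      # per-pixel matching cost at disparity d
        cost[d, :, d:] = uniform_filter(diff, size=win)    # aggregate the cost over a local window
    return cost.argmin(axis=0)                             # winner-take-all disparity map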
|
|
|
Alicia Fornes, V.C. Kieu, M. Visani, N. Journet, & Anjan Dutta. (2014). The ICDAR/GREC 2013 Music Scores Competition: Staff Removal. In B. Lamiroy, & J.-M. Ogier (Eds.), Graphics Recognition. Current Trends and Challenges (Vol. 8746, pp. 207–220). LNCS. Springer Berlin Heidelberg.
Abstract: The first competition on music scores, organized at ICDAR and GREC in 2011, aroused the interest of researchers, who participated in both the staff removal and writer identification tasks. In this second edition, we focus on the staff removal task and simulate a real-case scenario concerning old and degraded music scores. For this purpose, we have generated a new set of semi-synthetic images using two degradation models that we previously introduced: local noise and 3D distortions. In this extended paper we provide an extended description of the dataset, degradation models, evaluation metrics, the participants' methods and the obtained results that could not be presented in the ICDAR and GREC proceedings due to page limitations.
Keywords: Competition; Graphics recognition; Music scores; Writer identification; Staff removal
|
|
|
G. Thorvaldsen, Joana Maria Pujadas-Mora, T. Andersen, L. Eikvil, Josep Llados, Alicia Fornes, et al. (2015). A Tale of two Transcriptions. Historical Life Course Studies, 1–19.
Abstract: This article explains how two projects implement semi-automated transcription routines: for census sheets in Norway and marriage protocols from Barcelona. The Spanish system was created to transcribe the marriage license books from 1451 to 1905 for the Barcelona area, one of the world’s longest series of preserved vital records. Thus, in the project “Five Centuries of Marriages” (5CofM) at the Autonomous University of Barcelona’s Center for Demographic Studies, the Barcelona Historical Marriage Database has been built. More than 600,000 records were transcribed by 150 transcribers working online. The Norwegian material is cross-sectional, as it is the 1891 census, recorded on one sheet per person. This format, and the underlining of keywords for several variables, made it more feasible to semi-automate data entry than when many persons are listed on the same page. While Optical Character Recognition (OCR) for printed text is scientifically mature, computer vision research is now focused on more difficult problems such as handwriting recognition. In the marriage project, document analysis methods have been proposed to automatically recognize the marriage licenses. Fully automatic recognition is still a challenge, but some promising results have been obtained. In Spain, Norway and elsewhere the source material is available as scanned pictures on the Internet, opening up the possibility of further international cooperation on automating the transcription of historic source materials. As in projects to digitize printed materials, the optimal solution for handwritten sources is likely to be a combination of manual transcription and machine-assisted recognition.
Keywords: Nominative Sources; Census; Vital Records; Computer Vision; Optical Character Recognition; Word Spotting
|
|