Francesco Ciompi, Oriol Pujol, Carlo Gatta, Oriol Rodriguez-Leor, J. Mauri, & Petia Radeva. (2010). Fusing in-vitro and in-vivo intravascular ultrasound data for plaque characterization. IJCI - International Journal of Cardiovascular Imaging, 26(7), 763–779.
Abstract: Accurate detection of in-vivo vulnerable plaque in coronary arteries is still an open problem. Recent studies show that it is highly related to tissue structure and composition. Intravascular Ultrasound (IVUS) is a powerful imaging technique that provides a detailed cross-sectional image of the vessel, allowing exploration of artery morphology. IVUS data validation is usually performed by comparing post-mortem (in-vitro) IVUS data with the corresponding histological analysis of the tissue. The main drawback of this approach is the small number of available case studies and validated data, owing to the complex procedure of histological analysis. In-vivo IVUS data, on the other hand, is easy to obtain but cannot be histologically validated. In this work, we propose to enhance the in-vitro training data set by selectively including examples from in-vivo plaques. For this purpose, a Sequential Floating Forward Selection method is reformulated in the context of plaque characterization. The enhanced classifier is validated on the in-vitro data set, yielding an overall accuracy of 91.59% in discriminating among fibrotic, lipidic and calcified plaques, while reducing the gap between in-vivo and in-vitro data analysis. Experimental results suggest that the obtained classifier can properly be applied to in-vivo plaque characterization, and also demonstrate that the common assumption that the difference between in-vivo and in-vitro data is negligible is incorrect.
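Note: as an informal illustration of the selection strategy summarized in the abstract, the sketch below shows how a Sequential Floating Forward Selection loop over candidate groups of in-vivo samples might be organized, assuming a generic scikit-learn classifier and accuracy on held-out in-vitro data as the selection criterion. The function names, candidate grouping and choice of classifier are illustrative assumptions, not the authors' implementation.

```python
# Illustrative SFFS-style selection of in-vivo sample groups to add to an
# in-vitro training set; a group is kept only while it improves accuracy on
# histologically validated in-vitro validation data. All names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def accuracy_with(groups, idx, X_vitro, y_vitro, X_val, y_val):
    """Train on the in-vitro set plus the chosen in-vivo groups; score on in-vitro validation data."""
    X = np.vstack([X_vitro] + [groups[i][0] for i in idx])
    y = np.concatenate([y_vitro] + [groups[i][1] for i in idx])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    return clf.score(X_val, y_val)

def sffs_select(groups, X_vitro, y_vitro, X_val, y_val, max_rounds=30):
    """Sequential Floating Forward Selection over in-vivo groups (sketch only)."""
    selected = []
    best = accuracy_with(groups, selected, X_vitro, y_vitro, X_val, y_val)
    for _ in range(max_rounds):
        # Forward step: add the candidate in-vivo group that improves accuracy the most.
        candidates = [i for i in range(len(groups)) if i not in selected]
        if not candidates:
            break
        scores = {i: accuracy_with(groups, selected + [i], X_vitro, y_vitro, X_val, y_val)
                  for i in candidates}
        best_i = max(scores, key=scores.get)
        if scores[best_i] <= best:
            break
        selected.append(best_i)
        best = scores[best_i]
        # Floating (backward) step: drop a previously added group if removing it now helps.
        improved = True
        while improved and len(selected) > 1:
            improved = False
            for i in list(selected):
                trial = [j for j in selected if j != i]
                s = accuracy_with(groups, trial, X_vitro, y_vitro, X_val, y_val)
                if s > best:
                    selected, best, improved = trial, s, True
    return selected, best
```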
Sergio Escalera, Alicia Fornes, Oriol Pujol, Petia Radeva, Gemma Sanchez, & Josep Llados. (2009). Blurred Shape Model for Binary and Grey-level Symbol Recognition. PRL - Pattern Recognition Letters, 30(15), 1424–1433.
Abstract: Many symbol recognition problems require robust descriptors in order to obtain rich information from the data. However, finding a good descriptor is still an open issue due to the high variability of symbol appearance. Rotation, partial occlusions, elastic deformations, intra-class and inter-class variations, and high variability among symbols due to different writing styles are just a few of the problems. In this paper, we introduce a symbol shape descriptor to deal with the changes in appearance that these types of symbols suffer. The shape of the symbol is aligned based on principal components to make the recognition invariant to rotation and reflection. We then present the Blurred Shape Model (BSM) descriptor, whose features encode the probability of appearance of each pixel that outlines the symbol's shape. Moreover, we include the new descriptor in a system to deal with multi-class symbol categorization problems. Adaboost is used to train the binary classifiers, learning the BSM features that best split symbol classes. The binary problems are then embedded in an Error-Correcting Output Codes (ECOC) framework to deal with the multi-class case. The methodology is evaluated on different synthetic and real data sets. State-of-the-art descriptors and classifiers are compared, showing the robustness and better performance of the presented scheme for classifying symbols with high variability of appearance.
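Note: the following is a minimal, illustrative sketch of a Blurred Shape Model style descriptor, in which contour pixels vote into an n x n grid with weights inversely proportional to the distance to nearby bin centroids. The grid size, neighbourhood and weighting scheme are assumptions and do not reproduce the paper's exact formulation; the Adaboost/ECOC classification stage is omitted.

```python
# BSM-style descriptor sketch: each contour pixel spreads a vote over the
# nearest grid bin centroids, producing a blurred probability map of the shape.
import numpy as np

def bsm_descriptor(contour_xy, img_size, grid=8):
    """contour_xy: (N, 2) array of (x, y) contour pixels; img_size: (width, height)."""
    w, h = img_size
    # Centroids of the regular grid bins.
    cx = (np.arange(grid) + 0.5) * w / grid
    cy = (np.arange(grid) + 0.5) * h / grid
    centroids = np.stack(np.meshgrid(cx, cy), axis=-1).reshape(-1, 2)  # (grid*grid, 2)
    desc = np.zeros(grid * grid)
    for p in contour_xy:
        d = np.linalg.norm(centroids - p, axis=1)
        near = np.argsort(d)[:4]             # vote into the 4 closest bins (assumed neighbourhood)
        wgt = 1.0 / (d[near] + 1e-6)         # closer bins receive larger votes
        desc[near] += wgt / wgt.sum()
    return desc / max(desc.sum(), 1e-12)     # normalize so the descriptor sums to 1
```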
Mark Philip Philipsen, Jacob Velling Dueholm, Anders Jorgensen, Sergio Escalera, & Thomas B. Moeslund. (2018). Organ Segmentation in Poultry Viscera Using RGB-D. SENS - Sensors, 18(1), 117.
Abstract: We present a pattern recognition framework for semantic segmentation of visual structures, that is, multi-class labelling at pixel level, and apply it to the task of segmenting organs in the eviscerated viscera from slaughtered poultry in RGB-D images. This is a step towards replacing the current strenuous manual inspection at poultry processing plants. Features are extracted from feature maps such as activation maps from a convolutional neural network (CNN). A random forest classifier assigns class probabilities, which are further refined by utilizing context in a conditional random field. The presented method is compatible with both 2D and 3D features, which allows us to explore the value of adding 3D and CNN-derived features. The dataset consists of 604 RGB-D images showing 151 unique sets of eviscerated viscera from four different perspectives. A mean Jaccard index of 78.11% is achieved across the four classes of organs by using features derived from 2D, 3D and a CNN, compared to 74.28% using only basic 2D image features.
Keywords: semantic segmentation; RGB-D; random forest; conditional random field; 2D; 3D; CNN
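Note: a hedged sketch of the per-pixel classification stage, training a random forest on stacked RGB, depth and CNN-derived feature maps; the conditional random field refinement is omitted and all names and shapes are illustrative assumptions rather than the paper's code.

```python
# Per-pixel organ classification with a random forest over stacked feature maps
# (RGB, depth and upsampled CNN activations). Feature layout is assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def stack_features(rgb, depth, cnn_maps):
    """rgb: (H, W, 3), depth: (H, W), cnn_maps: (H, W, C) upsampled activations."""
    feats = np.concatenate([rgb, depth[..., None], cnn_maps], axis=-1)
    return feats.reshape(-1, feats.shape[-1])      # one feature vector per pixel

def train_pixel_rf(images, labels):
    """images: list of (rgb, depth, cnn_maps) tuples; labels: list of (H, W) class maps."""
    X = np.vstack([stack_features(*img) for img in images])
    y = np.concatenate([lbl.reshape(-1) for lbl in labels])
    return RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, y)

def predict_probabilities(rf, rgb, depth, cnn_maps):
    # Per-pixel class probabilities, which a CRF could then refine using context.
    proba = rf.predict_proba(stack_features(rgb, depth, cnn_maps))
    return proba.reshape(rgb.shape[0], rgb.shape[1], -1)
```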
Xavier Perez Sala, Sergio Escalera, Cecilio Angulo, & Jordi Gonzalez. (2014). A survey on model based approaches for 2D and 3D visual human pose recovery. SENS - Sensors, 14(3), 4189–4210.
Abstract: Human Pose Recovery has been studied in the field of Computer Vision for the last 40 years. Several approaches have been reported, and significant improvements have been obtained in both data representation and model design. However, the problem of Human Pose Recovery in uncontrolled environments is far from being solved. In this paper, we define a general taxonomy to group model based approaches for Human Pose Recovery, which is composed of five main modules: appearance, viewpoint, spatial relations, temporal consistency, and behavior. Subsequently, a methodological comparison is performed following the proposed taxonomy, evaluating current state-of-the-art approaches in the aforementioned five group categories. As a result of this comparison, we discuss the main advantages and drawbacks of the reviewed literature.
Keywords: human pose recovery; human body modelling; behavior analysis; computer vision
Antonio Hernandez, Miguel Reyes, Victor Ponce, & Sergio Escalera. (2012). GrabCut-Based Human Segmentation in Video Sequences. SENS - Sensors, 12(11), 15376–15393.
Abstract: In this paper, we present a fully-automatic Spatio-Temporal GrabCut human segmentation methodology that combines tracking and segmentation. GrabCut initialization is performed by a HOG-based subject detection, face detection, and skin color model. Spatial information is included by Mean Shift clustering, whereas temporal coherence is enforced through a history of Gaussian Mixture Models. Moreover, full face and pose recovery is obtained by combining human segmentation with Active Appearance Models and Conditional Random Fields. Results on public datasets and on a new Human Limb dataset show robust segmentation and recovery of both face and pose using the presented methodology.
Keywords: segmentation; human pose recovery; GrabCut; GraphCut; Active Appearance Models; Conditional Random Field
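Note: the sketch below illustrates only the core idea of initializing GrabCut from a HOG person detection, using standard OpenCV calls; the face detection, skin colour model, Mean Shift clustering and temporal GMM components of the paper's full pipeline are not reproduced here.

```python
# GrabCut initialized from a HOG person detection (OpenCV sketch, not the paper's code).
import cv2
import numpy as np

def segment_person(frame_bgr):
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _ = hog.detectMultiScale(frame_bgr, winStride=(8, 8))
    if len(rects) == 0:
        return None
    x, y, w, h = rects[0]                      # use the first detection as the GrabCut rectangle
    mask = np.zeros(frame_bgr.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(frame_bgr, mask, (x, y, w, h), bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    # Keep definite and probable foreground pixels as the person segmentation.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
```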