|
Records |
Links |
|
Author |
Mohammad Ali Bagheri; Gang Hu; Qigang Gao; Sergio Escalera |
|
|
Title |
A Framework of Multi-Classifier Fusion for Human Action Recognition |
Type |
Conference Article |
|
Year |
2014 |
Publication |
22nd International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1260-1265 |
|
|
Keywords |
|
|
|
Abstract |
The performance of different action-recognition methods using skeleton joint locations has recently been studied by several computer vision researchers. However, the potential improvement in classification through classifier fusion by ensemble-based methods has received little attention. In this work, we evaluate the performance of an ensemble of five action learning techniques, each performing the recognition task from a different perspective. The underlying rationale of the fusion approach is that different learners are trained on differently structured input descriptors/features, and these heterogeneous structures cannot be concatenated and used by a single learner. In addition, combining the outputs of several learners reduces the risk of an unfortunate selection of a poorly performing learner, leading to a more robust and generally applicable framework. We also propose two simple, yet effective, action description techniques. To improve recognition performance, a powerful combination strategy based on the Dempster-Shafer theory is utilized, which effectively exploits the diversity of base learners trained on different sources of information. The recognition results of the individual classifiers are compared with those obtained by fusing the classifiers' outputs, showing the advanced performance of the proposed methodology. |
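The Dempster-Shafer combination step described in the abstract can be sketched for the simple case in which each base classifier assigns belief mass only to singleton class hypotheses; Dempster's rule then reduces to a conflict-normalized product. The action names and mass values below are hypothetical, not taken from the paper:

```python
def dempster_combine(m1, m2):
    """Combine two basic belief assignments over the same set of
    singleton class hypotheses using Dempster's rule: multiply the
    masses of agreeing hypotheses and renormalize by the non-conflict."""
    classes = set(m1) | set(m2)
    # Unnormalized joint masses: only identical singletons intersect.
    joint = {c: m1.get(c, 0.0) * m2.get(c, 0.0) for c in classes}
    conflict = 1.0 - sum(joint.values())  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {c: mass / (1.0 - conflict) for c, mass in joint.items()}

# Hypothetical soft outputs of two base classifiers for three actions:
m_a = {"wave": 0.6, "clap": 0.3, "walk": 0.1}
m_b = {"wave": 0.5, "clap": 0.4, "walk": 0.1}
fused = dempster_combine(m_a, m_b)
```

Note how agreement between the two learners sharpens the fused belief in the winning class beyond either individual mass.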
|
|
Address |
Stockholm; Sweden; August 2014 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1051-4651 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ BHG2014 |
Serial |
2446 |
|
|
|
|
|
Author |
Frederic Sampedro; Sergio Escalera; Anna Domenech; Ignasi Carrio |
|
|
Title |
Automatic Tumor Volume Segmentation in Whole-Body PET/CT Scans: A Supervised Learning Approach |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Journal of Medical Imaging and Health Informatics |
Abbreviated Journal |
JMIHI |
|
|
Volume |
5 |
Issue |
2 |
Pages |
192-201 |
|
|
Keywords |
CONTEXTUAL CLASSIFICATION; PET/CT; SUPERVISED LEARNING; TUMOR SEGMENTATION; WHOLE BODY |
|
|
Abstract |
Whole-body 3D PET/CT tumoral volume segmentation provides relevant diagnostic and prognostic information in clinical oncology and nuclear medicine. Carrying out this procedure manually by a medical expert is time-consuming and suffers from inter- and intra-observer variability. In this paper, a completely automatic approach to this task is presented. First, the problem is stated and described in both clinical and technological terms. Then, a novel supervised learning segmentation framework is introduced. The segmentation-by-learning approach is defined within a cascade of AdaBoost classifiers and a 3D contextual proposal of multiscale stacked sequential learning. Segmentation accuracy results on 200 breast cancer whole-body PET/CT volumes show a mean 49% sensitivity, 99.993% specificity and 39% Jaccard overlap index, which represent good performance at both the clinical and technological levels. |
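The three metrics reported in the abstract (sensitivity, specificity, Jaccard overlap) can be computed voxel-wise from a predicted and a ground-truth binary mask. The tiny flattened masks below are purely illustrative:

```python
def segmentation_scores(pred, truth):
    """Voxel-wise sensitivity, specificity and Jaccard overlap for two
    equally sized binary masks given as flat sequences of 0/1."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    jaccard = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return sensitivity, specificity, jaccard

pred  = [1, 1, 0, 0, 1, 0, 0, 0]   # hypothetical prediction
truth = [1, 0, 0, 0, 1, 1, 0, 0]   # hypothetical expert mask
sens, spec, jac = segmentation_scores(pred, truth)
```

The very high specificity in the paper reflects the huge number of true-negative background voxels in a whole-body volume, which is why the Jaccard index is the more informative overlap measure.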
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ SED2015 |
Serial |
2584 |
|
|
|
|
|
Author |
Antonio Hernandez; Stan Sclaroff; Sergio Escalera |
|
|
Title |
Contextual rescoring for Human Pose Estimation |
Type |
Conference Article |
|
Year |
2014 |
Publication |
25th British Machine Vision Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
A contextual rescoring method is proposed for improving the detection of body joints of a pictorial structure model for human pose estimation. A set of mid-level parts is incorporated in the model, and their detections are used to extract spatial and score-related features relative to other body joint hypotheses. A technique is proposed for the automatic discovery of a compact subset of poselets that covers a set of validation images while maximizing precision. A rescoring mechanism is defined as a set-based boosting classifier that computes a new score for each body joint detection, given its relationship to detections of other body joints and mid-level parts in the image. This new score complements the unary potential of a discriminatively trained pictorial structure model. Experiments on two benchmarks show performance improvements when considering the proposed mid-level image representation and rescoring approach in comparison with other pictorial structure-based approaches. |
|
|
Address |
Nottingham; UK; September 2014 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC |
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
HSE2014 |
Serial |
2525 |
|
|
|
|
|
Author |
Frederic Sampedro; Anna Domenech; Sergio Escalera |
|
|
Title |
Static and dynamic computational cancer spread quantification in whole body FDG-PET/CT scans |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Journal of Medical Imaging and Health Informatics |
Abbreviated Journal |
JMIHI |
|
|
Volume |
4 |
Issue |
6 |
Pages |
825-831 |
|
|
Keywords |
CANCER SPREAD; COMPUTER AIDED DIAGNOSIS; MEDICAL IMAGING; TUMOR QUANTIFICATION |
|
|
Abstract |
In this work we address the computational cancer spread quantification scenario in whole-body FDG-PET/CT scans. At the static level, this setting can be modeled as a clustering problem on the set of 3D connected components of the whole-body PET tumoral segmentation mask produced by nuclear medicine physicians. At the dynamic level, an ad-hoc algorithm is proposed to quantify the time evolution of cancer spread which, when combined with other existing indicators, gives rise to the metabolic tumor volume-aggressiveness-spread time evolution chart, a novel tool that we claim would prove useful in nuclear medicine and oncological clinical or research scenarios. Good performance results of the proposed methodologies at both the clinical and technological levels are shown using a dataset of 48 segmented whole-body FDG-PET/CT scans. |
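The static-level representation mentioned above, the set of 3D connected components of a binary tumoral mask, can be extracted with a simple breadth-first flood fill. The toy voxel set below is illustrative, and 6-connectivity is an assumption:

```python
from collections import deque

def connected_components_3d(mask):
    """Group 6-connected components in a binary 3-D mask given as a
    set of (x, y, z) voxel coordinates; returns a list of voxel sets."""
    remaining = set(mask)
    components = []
    while remaining:
        seed = next(iter(remaining))
        remaining.discard(seed)
        queue, comp = deque([seed]), set()
        while queue:
            x, y, z = queue.popleft()
            comp.add((x, y, z))
            # Visit the six face-adjacent neighbours.
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (x + dx, y + dy, z + dz)
                if n in remaining:
                    remaining.discard(n)
                    queue.append(n)
        components.append(comp)
    return components

# Two separate lesions: a pair of touching voxels and an isolated voxel.
voxels = {(0, 0, 0), (1, 0, 0), (5, 5, 5)}
lesions = connected_components_3d(voxels)
```

Each resulting component stands in for one lesion, and the clustering described in the abstract then operates on this set of components.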
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ SDE2014b |
Serial |
2548 |
|
|
|
|
|
Author |
Alvaro Cepero; Albert Clapes; Sergio Escalera |
|
|
Title |
Automatic non-verbal communication skills analysis: a quantitative evaluation |
Type |
Journal Article |
|
Year |
2015 |
Publication |
AI Communications |
Abbreviated Journal |
AIC |
|
|
Volume |
28 |
Issue |
1 |
Pages |
87-101 |
|
|
Keywords |
Social signal processing; human behavior analysis; multi-modal data description; multi-modal data fusion; non-verbal communication analysis; e-Learning |
|
|
Abstract |
Oral communication competence ranks among the most relevant skills for one's professional and personal life. Given the importance of communication in our daily activities, it is crucial to study methods to evaluate it and to provide feedback that can be used to improve these communication capabilities and, therefore, to learn how to express ourselves better. In this work, we propose a system capable of quantitatively evaluating the quality of oral presentations in an automatic fashion. The system is based on a multi-modal RGB, depth, and audio data description and a fusion approach in order to recognize behavioral cues and train classifiers able to predict communication quality levels. The performance of the proposed system is tested on a novel dataset containing real Bachelor thesis defenses, presentations from 8th-semester Bachelor courses, and Master course presentations at the Universitat de Barcelona. Using the marks assigned by actual instructors as ground truth, our system achieves high performance in categorizing and ranking presentations by their quality, as well as in making real-valued mark predictions. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0921-7126 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ CCE2015 |
Serial |
2549 |
|
|
|
|
|
Author |
Frederic Sampedro; Sergio Escalera; Anna Puig |
|
|
Title |
Iterative Multiclass Multiscale Stacked Sequential Learning: definition and application to medical volume segmentation |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
46 |
Issue |
|
Pages |
1-10 |
|
|
Keywords |
Machine learning; Sequential learning; Multi-class problems; Contextual learning; Medical volume segmentation |
|
|
Abstract |
In this work we present the iterative multi-class multi-scale stacked sequential learning framework (IMMSSL), a novel learning scheme that is particularly suited for medical volume segmentation applications. This model exploits the inherent voxel contextual information of the structures of interest in order to improve its segmentation performance. Without any prior assumption on the feature set or learning algorithm, the proposed scheme directly seeks to learn the contextual properties of a region from the predicted classifications of previous classifiers within an iterative scheme. Performance results regarding segmentation accuracy in three two-class and multi-class medical volume datasets show a significant improvement with respect to state-of-the-art alternatives. Due to its ease of implementation and its independence from the feature space and learning algorithm, the presented machine learning framework could be considered a first choice in complex volume segmentation scenarios. |
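The core stacked sequential idea, feeding pooled stage-1 predictions back as contextual features for a second-stage decision, can be illustrated on a 1-D toy signal. The threshold rule, the window radii, and the voting weights below are stand-ins for the real base classifiers and multi-scale decomposition, not the paper's method:

```python
def base_predict(x):
    """Stage 1: a hypothetical per-voxel rule that looks at
    intensity alone, ignoring any context."""
    return 1 if x > 0.5 else 0

def context_features(preds, scales=(1, 2)):
    """Mean of stage-1 predictions in windows of several radii around
    each position: a 1-D stand-in for the multi-scale decomposition."""
    feats = []
    for i in range(len(preds)):
        feats.append([sum(preds[max(0, i - s): i + s + 1]) /
                      len(preds[max(0, i - s): i + s + 1]) for s in scales])
    return feats

def stacked_predict(xs):
    """Stage 2: re-label each position using its own stage-1 label plus
    the coarser-scale context (here a simple weighted vote)."""
    stage1 = [base_predict(x) for x in xs]
    ctx = context_features(stage1)
    # An isolated spike disagreeing with its context can be flipped.
    return [1 if (p + row[1] * 2) / 3 > 0.5 else 0
            for p, row in zip(stage1, ctx)]

signal = [0.1, 0.2, 0.9, 0.1, 0.1, 0.1, 0.8, 0.9, 0.95, 0.85, 0.2]
```

On this signal the isolated stage-1 positive at index 2 is removed by the contextual pass, while the contiguous run of positives survives, which is the qualitative behavior the framework iterates on.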
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ SEP2014 |
Serial |
2550 |
|
|
|
|
|
Author |
Frederic Sampedro; Sergio Escalera |
|
|
Title |
Spatial codification of label predictions in Multi-scale Stacked Sequential Learning: A case study on multi-class medical volume segmentation |
Type |
Journal Article |
|
Year |
2015 |
Publication |
IET Computer Vision |
Abbreviated Journal |
IETCV |
|
|
Volume |
9 |
Issue |
3 |
Pages |
439-446 |
|
|
Keywords |
|
|
|
Abstract |
In this study, the authors propose the spatial codification of label predictions within the multi-scale stacked sequential learning (MSSL) framework, a successful learning scheme for dealing with non-independent identically distributed data entries. After providing a motivation for this objective, they describe its theoretical framework, based on the introduction of the blurred shape model as a smart descriptor to codify the spatial distribution of the predicted labels, and define the new extended feature set for the second stacked classifier. They then particularise this scheme for volume segmentation applications. Finally, they test the implementation of the proposed framework on two medical volume segmentation datasets, obtaining significant performance improvements (at 95% confidence) in comparison to a standard AdaBoost classifier and classical MSSL approaches. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1751-9632 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ SaE2015 |
Serial |
2551 |
|
|
|
|
|
Author |
Daniel Sanchez; Miguel Angel Bautista; Sergio Escalera |
|
|
Title |
HuPBA 8k+: Dataset and ECOC-GraphCut based Segmentation of Human Limbs |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Neurocomputing |
Abbreviated Journal |
NEUCOM |
|
|
Volume |
150 |
Issue |
A |
Pages |
173–188 |
|
|
Keywords |
Human limb segmentation; ECOC; Graph-Cuts |
|
|
Abstract |
Human multi-limb segmentation in RGB images has attracted a lot of interest in the research community because of the huge number of possible applications in fields like Human-Computer Interaction, Surveillance, eHealth, or Gaming. Nevertheless, human multi-limb segmentation is a very hard task because of the changes in appearance produced by different points of view, clothing, lighting conditions, occlusions, and the number of articulations of the human body. Furthermore, this huge pose variability makes large annotated datasets difficult to obtain. In this paper, we introduce the HuPBA8k+ dataset. The dataset contains more than 8000 labeled frames at pixel precision, including more than 120000 manually labeled samples of 14 different limbs. For completeness, the dataset is also labeled at frame level with action annotations drawn from an 11-action dictionary which includes both single-person actions and person-person interactive actions. Furthermore, we also propose a two-stage approach for the segmentation of human limbs. In the first stage, cascades of classifiers are trained to split human limbs in a tree-structured way, which is included in an Error-Correcting Output Codes (ECOC) framework to define a body-like probability map. This map is used to obtain a binary mask of the subject by means of GMM color modelling and GraphCuts theory. In the second stage, we embed a similar tree structure in an ECOC framework to build a more accurate set of limb-like probability maps within the segmented user mask, which are fed to a multi-label GraphCut procedure to obtain the final multi-limb segmentation. The methodology is tested on the novel HuPBA8k+ dataset, showing performance improvements in comparison to state-of-the-art approaches. In addition, a baseline of standard action recognition methods for the 11 action categories of the novel dataset is also provided. |
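The ECOC machinery used in both stages, combining binary dichotomizers through a coding matrix and decoding by distance to class codewords, can be sketched with a plain Hamming decoder. The 3-class one-vs-one matrix and the dichotomizer outputs below are illustrative, not the paper's tree-structured design:

```python
def ecoc_decode(outputs, code_matrix):
    """Return the index of the class whose codeword is closest in
    Hamming distance to the dichotomizers' outputs, ignoring the
    0 'don't care' entries of the coding matrix."""
    def dist(codeword):
        return sum(1 for o, c in zip(outputs, codeword) if c != 0 and o != c)
    return min(range(len(code_matrix)), key=lambda k: dist(code_matrix[k]))

# One-vs-one coding for 3 classes; columns: (0 vs 1), (0 vs 2), (1 vs 2).
M = [
    [+1, +1,  0],   # class 0
    [-1,  0, +1],   # class 1
    [ 0, -1, -1],   # class 2
]
predicted = ecoc_decode([-1, -1, +1], M)  # hypothetical binary outputs
```

In the paper's pipeline the hard decoding is replaced by soft detector outputs aggregated into limb-like probability maps, but the codeword-matching principle is the same.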
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ SBE2015 |
Serial |
2552 |
|
|
|
|
|
Author |
Eloi Puertas; Miguel Angel Bautista; Daniel Sanchez; Sergio Escalera; Oriol Pujol |
|
|
Title |
Learning to Segment Humans by Stacking their Body Parts |
Type |
Conference Article |
|
Year |
2014 |
Publication |
ECCV Workshop on ChaLearn Looking at People |
Abbreviated Journal |
|
|
|
Volume |
8925 |
Issue |
|
Pages |
685-697 |
|
|
Keywords |
Human body segmentation; Stacked Sequential Learning |
|
|
Abstract |
Human segmentation in still images is a complex task due to the wide range of body poses and drastic changes in environmental conditions. Usually, human body segmentation is treated in a two-stage fashion: first, a human body part detection step is performed, and then human part detections are used as prior knowledge to be optimized by segmentation strategies. In this paper, we present a two-stage scheme based on Multi-Scale Stacked Sequential Learning (MSSL). We define an extended feature set by stacking a multi-scale decomposition of body part likelihood maps. These likelihood maps are obtained in a first stage by means of an ECOC ensemble of soft body part detectors. In a second stage, contextual relations of part predictions are learnt by a binary classifier, obtaining an accurate body confidence map. The obtained confidence map is fed to a graph cut optimization procedure to obtain the final segmentation. Results show improved segmentation when MSSL is included in the human segmentation pipeline. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCVW |
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ PBS2014 |
Serial |
2553 |
|
|
|
|
|
Author |
Antonio Hernandez |
|
|
Title |
From pixels to gestures: learning visual representations for human analysis in color and depth data sequences |
Type |
Book Whole |
|
Year |
2015 |
Publication |
PhD Thesis, Universitat de Barcelona-CVC |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
The visual analysis of humans from images is an important topic of interest due to its relevance to many computer vision applications like pedestrian detection, monitoring and surveillance, human-computer interaction, e-health or content-based image retrieval, among others.
In this dissertation we are interested in learning different visual representations of the human body that are helpful for the visual analysis of humans in images and video sequences. To that end, we analyze both RGB and depth image modalities and address the problem from three different research lines, at different levels of abstraction; from pixels to gestures: human segmentation, human pose estimation and gesture recognition.
First, we show how binary segmentation (object vs. background) of the human body in image sequences is helpful to remove all the background clutter present in the scene. The presented method, based on Graph cuts optimization, enforces spatio-temporal consistency of the produced segmentation masks among consecutive frames. Secondly, we present a framework for multi-label segmentation for obtaining much more detailed segmentation masks: instead of just obtaining a binary representation separating the human body from the background, finer segmentation masks can be obtained separating the different body parts.
At a higher level of abstraction, we aim for a simpler yet descriptive representation of the human body. Human pose estimation methods usually rely on skeletal models of the human body, formed by segments (or rectangles) that represent the body limbs, appropriately connected following the kinematic constraints of the human body. In practice, such skeletal models must fulfill some constraints in order to allow for efficient inference, while actually limiting the expressiveness of the model. In order to cope with this, we introduce a top-down approach for predicting the position of the body parts in the model, using a mid-level part representation based on Poselets.
Finally, we propose a framework for gesture recognition based on the bag of visual words framework. We leverage the benefits of RGB and depth image modalities by combining modality-specific visual vocabularies in a late fusion fashion. A new rotation-variant depth descriptor is presented, yielding better results than other state-of-the-art descriptors. Moreover, spatio-temporal pyramids are used to encode rough spatial and temporal structure. In addition, we present a probabilistic reformulation of Dynamic Time Warping for gesture segmentation in video sequences. A Gaussian-based probabilistic model of a gesture is learnt, implicitly encoding possible deformations in both spatial and time domains. |
|
|
Address |
January 2015 |
|
|
Corporate Author |
|
Thesis |
Ph.D. thesis |
|
|
Publisher |
Ediciones Graficas Rey |
Place of Publication |
|
Editor |
Sergio Escalera;Stan Sclaroff |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-84-940902-0-2 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ Her2015 |
Serial |
2576 |
|
|
|
|
|
Author |
Frederic Sampedro; Anna Domenech; Sergio Escalera; Ignasi Carrio |
|
|
Title |
Deriving global quantitative tumor response parameters from 18F-FDG PET-CT scans in patients with non-Hodgkin's lymphoma |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Nuclear Medicine Communications |
Abbreviated Journal |
NMC |
|
|
Volume |
36 |
Issue |
4 |
Pages |
328-333 |
|
|
Keywords |
|
|
|
Abstract |
OBJECTIVES:
The aim of the study was to address the need for quantifying the global cancer time evolution magnitude from a pair of time-consecutive positron emission tomography-computed tomography (PET-CT) scans. In particular, we focus on the computation of indicators using image-processing techniques that seek to model non-Hodgkin's lymphoma (NHL) progression or response severity.
MATERIALS AND METHODS:
A total of 89 pairs of time-consecutive PET-CT scans from NHL patients were stored in a nuclear medicine station for subsequent analysis. These were classified by a consensus of nuclear medicine physicians into progressions, partial responses, mixed responses, complete responses, and relapses. The cases of each group were ordered by magnitude following visual analysis. Thereafter, a set of quantitative indicators designed to model the cancer evolution magnitude within each group were computed using semiautomatic and automatic image-processing techniques. Performance evaluation of the proposed indicators was measured by a correlation analysis with the expert-based visual analysis.
RESULTS:
The set of proposed indicators achieved Pearson's correlation results in each group with respect to the expert-based visual analysis: 80.2% in progressions, 77.1% in partial response, 68.3% in mixed response, 88.5% in complete response, and 100% in relapse. In the progression and mixed response groups, the proposed indicators outperformed the common indicators used in clinical practice [changes in metabolic tumor volume, mean, maximum, peak standardized uptake value (SUV mean, SUV max, SUV peak), and total lesion glycolysis] by more than 40%.
CONCLUSION:
Computing global indicators of NHL response using PET-CT imaging techniques offers a strong correlation with the associated expert-based visual analysis, motivating the future incorporation of such quantitative and highly observer-independent indicators in oncological decision making or treatment response evaluation scenarios. |
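The validation described above hinges on Pearson's correlation between an automatic indicator and the expert-based ordering. A minimal sketch of that computation follows; the indicator values and expert ranks are hypothetical:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between an automatic indicator
    and the expert-assigned severity ordering."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical indicator values vs. expert ranks for five cases:
indicator = [0.2, 0.5, 1.1, 1.4, 2.0]
expert = [1, 2, 3, 4, 5]
r = pearson_r(indicator, expert)
```

A value of r near 1 means the indicator preserves the expert's ordering of case severity, which is exactly what the per-group percentages in the results measure.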
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ SDE2015 |
Serial |
2605 |
|
|
|
|
|
Author |
Firat Ismailoglu; Ida G. Sprinkhuizen-Kuyper; Evgueni Smirnov; Sergio Escalera; Ralf Peeters |
|
|
Title |
Fractional Programming Weighted Decoding for Error-Correcting Output Codes |
Type |
Conference Article |
|
Year |
2015 |
Publication |
Multiple Classifier Systems: Proceedings of the 12th International Workshop, MCS 2015 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
38-50 |
|
|
Keywords |
|
|
|
Abstract |
In order to increase the classification performance obtained using Error-Correcting Output Codes (ECOC) designs, introducing weights in the decoding phase of the ECOC has attracted a lot of interest. In this work, we present a method for ECOC designs that focuses on increasing the hypothesis margin on the data samples given a base classifier. While achieving this, we implicitly reward base classifiers with high performance and punish those with low performance. The resulting objective function is of the fractional programming type, and we deal with this problem through Dinkelbach's algorithm. The conducted tests over well-known UCI datasets show that the presented method is superior to unweighted decoding and that it outperforms the state-of-the-art weighted decoding methods in most of the performed experiments. |
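Dinkelbach's algorithm, used above to handle the fractional objective, can be sketched over a finite candidate set: repeatedly maximise the linearised objective f(x) − λ·g(x) and update λ with the best ratio, stopping when the linearised optimum reaches zero. The toy f and g below are illustrative, not the paper's margin objective:

```python
def dinkelbach(candidates, f, g, tol=1e-9):
    """Maximise f(x)/g(x) (with g > 0) over a finite candidate set via
    Dinkelbach's parametric scheme."""
    lam = 0.0
    while True:
        # Solve the linearised subproblem for the current lambda.
        x_best = max(candidates, key=lambda x: f(x) - lam * g(x))
        value = f(x_best) - lam * g(x_best)
        if value < tol:  # linearised optimum ~ 0 => lam is the optimal ratio
            return x_best, lam
        lam = f(x_best) / g(x_best)

xs = [1, 2, 3, 4, 5]
# f/g simplifies to x - 1 here, so the maximiser is x = 5 with ratio 4.
x_star, ratio = dinkelbach(xs, f=lambda x: x * x - 1, g=lambda x: x + 1)
```

Each iteration only requires solving a non-fractional subproblem, which is why the scheme is attractive when the weighted-decoding objective is a ratio.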
|
|
Address |
Gunzburg; Germany; June 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer International Publishing |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-319-20247-1 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MCS |
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ ISS2015 |
Serial |
2601 |
|
|
|
|
|
Author |
Isabelle Guyon; Kristin Bennett; Gavin Cawley; Hugo Jair Escalante; Sergio Escalera; Tin Kam Ho; Nuria Macia; Bisakha Ray; Alexander Statnikov; Evelyne Viegas |
|
|
Title |
Design of the 2015 ChaLearn AutoML Challenge |
Type |
Conference Article |
|
Year |
2015 |
Publication |
IEEE International Joint Conference on Neural Networks (IJCNN 2015) |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
ChaLearn is organizing for IJCNN 2015 an Automatic Machine Learning challenge (AutoML) to solve classification and regression problems from given feature representations, without any human intervention. This is a challenge with code submission: the submitted code can be executed automatically on the challenge servers to train and test learning machines on new datasets. However, there is no obligation to submit code; half of the prizes can be won by just submitting prediction results. There are six rounds (Prep, Novice, Intermediate, Advanced, Expert, and Master) in which datasets of progressive difficulty are introduced (5 per round). There is no requirement to participate in previous rounds to enter a new round. The rounds alternate AutoML phases, in which submitted code is “blind tested” on datasets the participants have never seen before, and Tweakathon phases, which give the participants time (about 1 month) to improve their methods by tweaking their code on those datasets. This challenge will push the state of the art in fully automatic machine learning on a wide range of problems taken from real-world applications. The platform will remain available beyond the termination of the challenge: http://codalab.org/AutoML |
|
|
Address |
Killarney; Ireland; July 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IJCNN |
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ GBC2015a |
Serial |
2604 |
|
|
|
|
|
Author |
Frederic Sampedro; Sergio Escalera; Anna Domenech; Ignasi Carrio |
|
|
Title |
A computational framework for cancer response assessment based on oncological PET-CT scans |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Computers in Biology and Medicine |
Abbreviated Journal |
CBM |
|
|
Volume |
55 |
Issue |
|
Pages |
92–99 |
|
|
Keywords |
Computer aided diagnosis; Nuclear medicine; Machine learning; Image processing; Quantitative analysis |
|
|
Abstract |
In this work we present a comprehensive computational framework to help in the clinical assessment of cancer response from a pair of time-consecutive oncological PET-CT scans. In this scenario, we describe the design and implementation of a supervised machine learning system that predicts and quantifies cancer progression or response conditions by introducing a novel feature set modeling the underlying clinical context. Performance results in 100 clinical cases (corresponding to 200 whole-body PET-CT scans), comparing expert-based visual analysis with classifier decision making, show up to 70% accuracy within a completely automatic pipeline and 90% accuracy when providing the system with expert-guided PET tumor segmentation masks. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ SED2014 |
Serial |
2606 |
|
|
|
|
|
Author |
Andres Traumann; Gholamreza Anbarjafari; Sergio Escalera |
|
|
Title |
Accurate 3D Measurement Using Optical Depth Information |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Electronics Letters |
Abbreviated Journal |
EL |
|
|
Volume |
51 |
Issue |
18 |
Pages |
1420-1422 |
|
|
Keywords |
|
|
|
Abstract |
A novel three-dimensional measurement technique is proposed. The methodology consists of mapping the screen coordinates reported by the optical camera to real-world coordinates and integrating distance gradients from the beginning to the end point, while also minimising the error by fitting pixel locations to a smooth curve. The results demonstrate an accuracy of better than half a centimetre using the Microsoft Kinect II. |
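The distance-integration step, summing Euclidean increments between consecutive real-world 3D points recovered from pixel coordinates plus depth, can be sketched as follows. The sample points are hypothetical, and the pixel-to-world mapping and the smooth-curve fitting are omitted:

```python
import math

def path_length(points):
    """Approximate a real-world measurement by integrating distance
    gradients: sum the Euclidean steps between consecutive 3-D points
    (as mapped from camera pixels plus the depth channel)."""
    total = 0.0
    for p, q in zip(points, points[1:]):
        total += math.dist(p, q)
    return total

# Hypothetical 3-D samples along a straight 10 cm segment (metres):
samples = [(0.0, 0.0, 1.0), (0.05, 0.0, 1.0), (0.10, 0.0, 1.0)]
length = path_length(samples)
```

Summing per-step distances rather than taking the straight endpoint-to-endpoint distance is what lets the technique measure along curved surfaces.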
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ TAE2015 |
Serial |
2647 |
|