|
Frederic Sampedro, Sergio Escalera, Anna Domenech, & Ignasi Carrio. (2015). Automatic Tumor Volume Segmentation in Whole-Body PET/CT Scans: A Supervised Learning Approach. JMIHI - Journal of Medical Imaging and Health Informatics, 5(2), 192–201.
Abstract: Whole-body 3D PET/CT tumoral volume segmentation provides relevant diagnostic and prognostic information in clinical oncology and nuclear medicine. Carrying out this procedure manually by a medical expert is time-consuming and suffers from inter- and intra-observer variability. In this paper, a completely automatic approach to this task is presented. First, the problem is stated and described in both clinical and technological terms. Then, a novel supervised learning segmentation framework is introduced. The segmentation-by-learning approach is defined within a Cascade of Adaboost classifiers and a 3D contextual proposal of Multiscale Stacked Sequential Learning. Segmentation accuracy results on 200 breast cancer whole-body PET/CT volumes show a mean 49% sensitivity, 99.993% specificity and 39% Jaccard overlap index, which represent good performance at both the clinical and the technological level.
Keywords: Contextual classification; PET/CT; Supervised learning; Tumor segmentation; Whole body
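The cascade structure named in the abstract can be illustrated with a toy sketch. This is a hypothetical reconstruction, not the authors' code: a minimal discrete AdaBoost over threshold stumps, with stages chained so that each stage may reject a voxel early and only voxels passing every stage are labeled tumor. The data and all function names are made up for illustration.

```python
# Toy cascade of AdaBoost classifiers over 1-D threshold stumps.
# Hypothetical sketch only; the paper works on 3-D PET/CT voxel features.
import math

def train_stump(X, y, w):
    """Pick the single-feature threshold stump with lowest weighted error."""
    best = None
    for f in range(len(X[0])):
        for thr in sorted({x[f] for x in X}):
            for sign in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (1 if sign * (xi[f] - thr) >= 0 else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    return best

def adaboost(X, y, rounds=3):
    """Standard discrete AdaBoost: reweight samples, accumulate stumps."""
    w = [1.0 / len(X)] * len(X)
    stumps = []
    for _ in range(rounds):
        err, f, thr, sign = train_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        preds = [1 if sign * (x[f] - thr) >= 0 else -1 for x in X]
        w = [wi * math.exp(-alpha * yi * pi)
             for wi, yi, pi in zip(w, y, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
        stumps.append((alpha, f, thr, sign))
    return stumps

def predict(stumps, x):
    score = sum(a * (1 if s * (x[f] - t) >= 0 else -1)
                for a, f, t, s in stumps)
    return 1 if score >= 0 else -1

def cascade_predict(stages, x):
    """Each stage may reject early; only samples passing all stages are positive."""
    for stumps in stages:
        if predict(stumps, x) == -1:
            return -1
    return 1

# Toy 1-D "intensity" feature: high values are tumor-like.
X = [(0.1,), (0.2,), (0.3,), (0.8,), (0.9,)]
y = [-1, -1, -1, 1, 1]
stages = [adaboost(X, y)]
```

In the real pipeline each stage would be trained to keep nearly all positives while discarding a large fraction of negatives, which is what makes the cascade fast on whole-body volumes.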
|
|
|
Mariella Dimiccoli, Marc Bolaños, Estefania Talavera, Maedeh Aghaei, Stavri G. Nikolov, & Petia Radeva. (2017). SR-Clustering: Semantic Regularized Clustering for Egocentric Photo Streams Segmentation. CVIU - Computer Vision and Image Understanding, 155, 55–69.
Abstract: While wearable cameras are becoming increasingly popular, locating relevant information in large unstructured collections of egocentric images is still a tedious and time-consuming process. This paper addresses the problem of organizing egocentric photo streams acquired by a wearable camera into semantically meaningful segments. First, contextual and semantic information is extracted for each image by employing a Convolutional Neural Network approach. Later, by integrating language processing, a vocabulary of concepts is defined in a semantic space. Finally, by exploiting the temporal coherence in photo streams, images which share contextual and semantic attributes are grouped together. The resulting temporal segmentation is particularly suited for further analysis, ranging from activity and event recognition to semantic indexing and summarization. Experiments over egocentric sets of nearly 17,000 images show that the proposed approach outperforms state-of-the-art methods.
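The core grouping step can be sketched in miniature. This is not the SR-Clustering implementation; it is a hypothetical toy that cuts a photo stream into segments wherever the semantic-feature similarity between consecutive images drops, which is the temporal-coherence idea the abstract describes. Feature vectors and the threshold are invented for illustration.

```python
# Toy temporal segmentation of a photo stream by consecutive-frame
# semantic similarity. Hypothetical sketch, not the authors' method.
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def segment_stream(features, threshold=0.7):
    """Return segments as lists of image indices: a new segment starts
    when the current image no longer resembles the previous one."""
    segments, current = [], [0]
    for i in range(1, len(features)):
        if cosine(features[i - 1], features[i]) < threshold:
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return segments

# Toy per-image concept vectors: images 0-2 share semantics, 3-4 differ.
feats = [(1, 0), (0.9, 0.1), (1, 0.05), (0, 1), (0.1, 0.9)]
```

In the paper the features come from a CNN plus a semantic concept vocabulary, and the clustering also regularizes segment boundaries rather than thresholding a single pairwise similarity.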
|
|
|
Md. Mostafa Kamal Sarker, Hatem A. Rashwan, Farhan Akram, Estefania Talavera, Syeda Furruka Banu, Petia Radeva, et al. (2019). Recognizing Food Places in Egocentric Photo-Streams Using Multi-Scale Atrous Convolutional Networks and Self-Attention Mechanism. ACCESS - IEEE Access, 7, 39069–39082.
Abstract: Wearable sensors (e.g., lifelogging cameras) represent very useful tools to monitor people's daily habits and lifestyle. Wearable cameras are able to continuously capture different moments of the day of their wearers, their environment, and interactions with objects, people, and places reflecting their personal lifestyle. The food places where people eat, drink, and buy food, such as restaurants, bars, and supermarkets, can directly affect their daily dietary intake and behavior. Consequently, developing an automated monitoring system based on analyzing a person's food habits from daily recorded egocentric photo-streams of the food places can provide valuable means for people to improve their eating habits. This can be done by generating a detailed report of the time spent in specific food places by classifying the captured food place images into different groups. In this paper, we propose a self-attention mechanism with multi-scale atrous convolutional networks to generate discriminative features from image streams to recognize a predetermined set of food place categories. We apply our model on an egocentric food place dataset called “EgoFoodPlaces” that comprises 43,392 images captured by 16 individuals using a lifelogging camera. The proposed model achieved an overall classification accuracy of 80% on the “EgoFoodPlaces” dataset, outperforming baseline methods such as VGG16, ResNet50, and InceptionV3.
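The atrous (dilated) convolution idea behind the multi-scale block can be shown in one dimension. The paper uses 2-D atrous convolutions inside a CNN with self-attention; this hypothetical toy only demonstrates how spacing kernel taps by a dilation rate enlarges the receptive field without adding weights, and how several rates yield multi-scale responses.

```python
# 1-D atrous (dilated) convolution, valid padding.
# Hypothetical illustration; the paper applies 2-D versions inside a CNN.

def atrous_conv1d(signal, kernel, rate):
    """Dilated convolution: kernel taps are spaced `rate` samples apart,
    so the receptive field grows with `rate` at constant kernel size."""
    span = (len(kernel) - 1) * rate
    return [sum(k * signal[i + j * rate] for j, k in enumerate(kernel))
            for i in range(len(signal) - span)]

def multi_scale(signal, kernel, rates=(1, 2, 4)):
    """Collect responses at several dilation rates, as in a
    multi-scale atrous block capturing context at different scales."""
    return [atrous_conv1d(signal, kernel, r) for r in rates]

sig = [1, 2, 3, 4, 5, 6]
```

With kernel `[1, 1]`, rate 1 sums adjacent samples while rate 4 sums samples four apart, so each branch of the block sees the stream at a different spatial scale before the self-attention step reweights them.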
|
|
|
Maedeh Aghaei, Mariella Dimiccoli, & Petia Radeva. (2016). Multi-face tracking by extended bag-of-tracklets in egocentric photo-streams. CVIU - Computer Vision and Image Understanding, 149, 146–156.
Abstract: Wearable cameras offer a hands-free way to record egocentric images of daily experiences, where social events are of special interest. The first step towards detection of social events is to track the appearance of multiple persons involved in them. In this paper, we propose a novel method to find correspondences of multiple faces in low temporal resolution egocentric videos acquired through a wearable camera. This kind of photo-stream imposes additional challenges on the multi-tracking problem with respect to conventional videos. Due to the free motion of the camera and to its low temporal resolution, abrupt changes in the field of view, in illumination conditions and in the target location are highly frequent. To overcome such difficulties, we propose a multi-face tracking method that generates a set of tracklets by finding correspondences along the whole sequence for each detected face and takes advantage of tracklet redundancy to deal with unreliable ones. Similar tracklets are grouped into the so-called extended bag-of-tracklets (eBoT), which aims to correspond to a specific person. Finally, a prototype tracklet is extracted for each eBoT, where occlusions are estimated by relying on a new measure of confidence. We validated our approach over an extensive dataset of egocentric photo-streams and compared it to state-of-the-art methods, demonstrating its effectiveness and robustness.
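The bag-of-tracklets grouping can be sketched with toy data. This is a hypothetical reconstruction, not the eBoT implementation: tracklets are per-frame face boxes, tracklets with high mean overlap land in the same bag, and the occlusion-confidence machinery of the paper is omitted. All names and thresholds are invented.

```python
# Toy grouping of face tracklets into bags by mean per-frame IoU.
# Hypothetical sketch only; the paper's eBoT adds prototype extraction
# with an occlusion-confidence measure.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def tracklet_similarity(t1, t2):
    """Mean per-frame IoU between two equal-length tracklets."""
    return sum(iou(a, b) for a, b in zip(t1, t2)) / len(t1)

def group_into_bags(tracklets, thr=0.5):
    """Greedy grouping: a tracklet joins the first bag it resembles,
    so each resulting bag is intended to cover one person."""
    bags = []
    for t in tracklets:
        for bag in bags:
            if tracklet_similarity(t, bag[0]) >= thr:
                bag.append(t)
                break
        else:
            bags.append([t])
    return bags

# Two tracklets following the same face, one following a distant face.
t_a = [(0, 0, 10, 10), (1, 0, 11, 10)]
t_b = [(0, 0, 10, 10), (2, 0, 12, 10)]
t_c = [(50, 50, 60, 60), (50, 50, 60, 60)]
bags = group_into_bags([t_a, t_b, t_c])
```

The redundancy the abstract mentions shows up here as bags holding several tracklets of the same person, from which a prototype can be distilled.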
|
|
|
Alejandro Cartas, Juan Marin, Petia Radeva, & Mariella Dimiccoli. (2018). Batch-based activity recognition from egocentric photo-streams revisited. PAA - Pattern Analysis and Applications, 21(4), 953–965.
Abstract: Wearable cameras can gather large amounts of image data that provide rich visual information about the daily activities of the wearer. Motivated by the large number of health applications that could be enabled by the automatic recognition of daily activities, such as lifestyle characterization for habit improvement, context-aware personal assistance and tele-rehabilitation services, we propose a system to classify 21 daily activities from photo-streams acquired by a wearable photo-camera. Our approach combines the advantages of a late fusion ensemble strategy relying on convolutional neural networks at image level with the ability of recurrent neural networks to account for the temporal evolution of high-level features in photo-streams without relying on event boundaries. The proposed batch-based approach achieved an overall accuracy of 89.85%, outperforming state-of-the-art end-to-end methodologies. These results were achieved on a dataset consisting of 44,902 egocentric pictures from three people captured over 26 days on average.
Keywords: Egocentric vision; Lifelogging; Activity recognition; Deep learning; Recurrent neural networks
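The two ingredients of this pipeline can be sketched without deep-learning libraries. This is an illustrative stand-in, not the authors' system: the late fusion is shown literally (averaging per-class probability vectors from several models), while the recurrent network is replaced by a simple sliding-window majority vote as a toy proxy for temporal modeling. All names and values are hypothetical.

```python
# Toy late-fusion ensemble plus temporal smoothing over a photo-stream.
# Hypothetical sketch; the paper uses CNN ensembles and an RNN instead
# of the majority-vote smoothing shown here.
from collections import Counter

def late_fusion(model_probs):
    """Average class-probability vectors produced by several models
    for the same image (late fusion at prediction level)."""
    n = len(model_probs)
    return [sum(p[c] for p in model_probs) / n
            for c in range(len(model_probs[0]))]

def temporal_smooth(labels, window=3):
    """Majority vote over a centered sliding window of per-image
    predictions, exploiting temporal coherence of the stream."""
    half = window // 2
    out = []
    for i in range(len(labels)):
        votes = labels[max(0, i - half): i + half + 1]
        out.append(Counter(votes).most_common(1)[0][0])
    return out

fused = late_fusion([[1.0, 0.0], [0.0, 1.0]])
smoothed = temporal_smooth(['eat', 'eat', 'walk', 'eat', 'eat'])
```

The batch-based aspect of the paper means whole contiguous runs of images are classified together; an RNN would let evidence flow across the run instead of the fixed window used here.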
|
|