Author Alejandro Cartas; Petia Radeva; Mariella Dimiccoli
Title Activities of Daily Living Monitoring via a Wearable Camera: Toward Real-World Applications  Type Journal Article
Year 2020  Publication IEEE Access  Abbreviated Journal ACCESS
Volume 8  Pages 77344-77363
Abstract Activity recognition from wearable photo-cameras is crucial for lifestyle characterization and health monitoring. However, to enable its widespread use in real-world applications, a high level of generalization needs to be ensured on unseen users. Currently, state-of-the-art methods have been tested only on relatively small datasets consisting of data collected by a few users that are partially seen during training. In this paper, we built a new egocentric dataset acquired by 15 people through a wearable photo-camera and used it to test the generalization capabilities of several state-of-the-art methods for egocentric activity recognition on unseen users and daily image sequences. In addition, we propose several variants of state-of-the-art deep learning architectures, and we show that it is possible to achieve 79.87% accuracy on users unseen during training. Furthermore, to show that the proposed dataset and approach can be useful in real-world applications, where data can be acquired by different wearable cameras and labeled data are scarcely available, we employed a domain adaptation strategy on two egocentric activity recognition benchmark datasets. These experiments show that the model learned with our dataset can easily be transferred to other domains with a very small amount of labeled data. Taken together, these results show that activity recognition from wearable photo-cameras is mature enough to be tested in real-world applications.
Notes MILAB; no proj  Approved no
Call Number Admin @ si @ CRD2020  Serial 3436
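As an illustration of the transfer step described in the abstract above (reusing a model learned on one egocentric dataset when only a small amount of labeled data from another camera is available), here is a minimal, generic transfer-learning sketch. The `backbone`, `loader`, feature dimension and hyper-parameters are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch: freeze a backbone pretrained on the source domain and
# retrain only a small classification head on scarce target-domain labels.
import torch
import torch.nn as nn

def adapt_to_new_domain(backbone: nn.Module, feat_dim: int, n_classes: int,
                        loader, epochs: int = 5, lr: float = 1e-3) -> nn.Module:
    """backbone is assumed to map a batch of images to (B, feat_dim) features."""
    for p in backbone.parameters():          # keep source-domain features fixed
        p.requires_grad = False
    head = nn.Linear(feat_dim, n_classes)    # only this layer is trained
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    backbone.eval()
    for _ in range(epochs):
        for images, labels in loader:        # small labeled target-domain set
            with torch.no_grad():
                feats = backbone(images)
            loss = loss_fn(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```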
 

 
Author Eduardo Aguilar; Petia Radeva
Title Uncertainty-aware integration of local and flat classifiers for food recognition  Type Journal Article
Year 2020  Publication Pattern Recognition Letters  Abbreviated Journal PRL
Volume 136  Pages 237-243
Abstract Food image recognition has recently attracted the attention of many researchers due to the challenging problem it poses, the ease of collecting food images, and its numerous applications to health and leisure. In real applications, it is necessary to analyze and recognize thousands of different foods. For this purpose, we propose a novel prediction scheme based on a class hierarchy that considers local classifiers in addition to a flat classifier. In order to decide which approach to use, we define different criteria that take into account both the analysis of the epistemic uncertainty estimated from the ‘children’ classifiers and the prediction from the ‘parent’ classifier. We evaluate our proposal using three uncertainty estimation methods, tested on two public food datasets. The results show that the proposed method reduces parent-child error propagation in hierarchical schemes and improves classification results compared to the single flat classifier, while maintaining good performance regardless of the uncertainty estimation method chosen.
Notes MILAB; no proj  Approved no
Call Number Admin @ si @ AgR2020  Serial 3525
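A minimal sketch of the kind of uncertainty-gated decision described in the abstract above: the local (‘children’) classifier of the predicted food group is trusted when its epistemic uncertainty is low, otherwise the flat classifier is used. The classifier interface, the uncertainty estimator and the threshold are illustrative assumptions, not the authors' exact criteria:

```python
# Hypothetical decision rule combining a flat classifier with per-group "local"
# classifiers, gated by the epistemic uncertainty of the children classifier.
def predict_dish(x, flat_clf, parent_clf, local_clfs, uncertainty, tau=0.5):
    """
    flat_clf:    classifier over all food classes (sklearn-like .predict).
    parent_clf:  classifier over high-level food groups.
    local_clfs:  dict mapping a food group to its 'children' classifier.
    uncertainty: function estimating the epistemic uncertainty of a classifier
                 on x (e.g. via Monte Carlo dropout); names are illustrative.
    """
    group = parent_clf.predict([x])[0]
    child = local_clfs[group]
    if uncertainty(child, x) < tau:          # trust the hierarchical (local) path
        return child.predict([x])[0]
    return flat_clf.predict([x])[0]          # otherwise fall back to the flat classifier
```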
 

 
Author Giuseppe Pezzano; Vicent Ribas Ripoll; Petia Radeva
Title CoLe-CNN: Context-learning convolutional neural network with adaptive loss function for lung nodule segmentation  Type Journal Article
Year 2021  Publication Computer Methods and Programs in Biomedicine  Abbreviated Journal CMPB
Volume 198  Pages 105792
Abstract Background and objective: An accurate segmentation of lung nodules in computed tomography images is a crucial step for the physical characterization of the tumour. Since it is often accomplished completely manually, nodule segmentation is a tedious and time-consuming procedure, which represents a major obstacle in clinical practice. In this paper, we propose a novel convolutional neural network for nodule segmentation that combines a light and efficient architecture with an innovative loss function and segmentation strategy. Methods: In contrast to most standard end-to-end architectures for nodule segmentation, our network learns the context of the nodules by producing two masks representing all the background and secondary-important elements in the computed tomography scan. The nodule is detected by subtracting the context from the original scan image. Additionally, we introduce an asymmetric loss function that automatically compensates for potential errors in the nodule annotations. We trained and tested our neural network on the public LIDC-IDRI database, compared it with the state of the art, and ran a pseudo-Turing test between four radiologists and the network. Results: The results show that the behaviour of the algorithm is very close to human performance and that its segmentation masks are almost indistinguishable from those made by the radiologists. Our method clearly outperforms the state of the art on CT nodule segmentation in terms of both F1 score and IoU. Conclusions: The main structure of the network ensures all the properties of the UNet architecture, while the Multi Convolutional Layers give more accurate pattern recognition. The newly adopted solutions also increase the detail on the border of the nodule, even under the noisiest conditions. The method can currently be applied to single CT slice nodule segmentation and represents a starting point for the future development of fully automatic 3D segmentation software.
Notes MILAB; no proj  Approved no
Call Number Admin @ si @ PRR2021  Serial 3530
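A minimal sketch of the context-subtraction idea described in the abstract above, assuming the network outputs soft masks for background and secondary structures; the array names, value ranges and threshold are illustrative, not taken from the paper:

```python
# Hypothetical sketch: combine the predicted context masks and subtract the
# context from the original slice; what remains is taken as the nodule region.
import numpy as np

def nodule_from_context(ct_slice: np.ndarray,
                        background_mask: np.ndarray,
                        secondary_mask: np.ndarray,
                        threshold: float = 0.5) -> np.ndarray:
    """All inputs are HxW arrays scaled to [0, 1]; the two masks are assumed to
    be the soft outputs of the context-learning CNN (names are illustrative)."""
    context = np.clip(background_mask + secondary_mask, 0.0, 1.0)
    residual = np.clip(ct_slice - context, 0.0, 1.0)   # subtract context from the scan
    return (residual >= threshold).astype(np.uint8)    # binary nodule mask
```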
 

 
Author Alina Matei; Andreea Glavan; Petia Radeva; Estefania Talavera
Title Towards Eating Habits Discovery in Egocentric Photo-Streams  Type Journal Article
Year 2021  Publication IEEE Access  Abbreviated Journal ACCESS
Volume 9  Pages 17495-17506
Abstract Eating habits are learned throughout the early stages of our lives. However, it is not easy to be aware of how our food-related routine affects our health. In this work, we address the unsupervised discovery of nutritional habits from egocentric photo-streams. We build a food-related behavioral pattern discovery model that discloses nutritional routines from the activities performed throughout the days. To do so, we rely on Dynamic Time Warping to evaluate the similarity among the collected days. Within this framework, we present a simple but robust and fast novel classification pipeline that outperforms the state of the art on food-related image classification, with a weighted accuracy and F-score of 70% and 63%, respectively. We then use the Isolation Forest method to identify, as anomalies in the user's daily life, days composed of nutritional activities that do not describe the person's habits. Furthermore, we show an application for the identification of food-related scenes when the camera wearer eats in isolation. Results show the good performance of the proposed model and its relevance for visualizing the nutritional habits of individuals.
Notes MILAB; no proj  Approved no
Call Number Admin @ si @ MGR2021  Serial 3637
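A minimal sketch of the pipeline outlined in the abstract above: days are compared with Dynamic Time Warping and atypical days are flagged with an Isolation Forest. The encoding of a day as a sequence of feature vectors and all parameters are illustrative assumptions:

```python
# Hypothetical sketch: DTW distances between days, then Isolation Forest over
# each day's distance profile to flag anomalous (non-routine) days.
import numpy as np
from sklearn.ensemble import IsolationForest

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between two
    sequences of feature vectors (one row per time step)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def flag_anomalous_days(days, contamination=0.1):
    """days: list of (T_i, d) arrays, one per recorded day (encoding assumed)."""
    n = len(days)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = dtw_distance(days[i], days[j])
    # each day is described by its DTW distances to all the other days
    labels = IsolationForest(contamination=contamination,
                             random_state=0).fit_predict(dist)
    return labels == -1        # True where the day is flagged as anomalous
```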
 

 
Author Michal Drozdzal; Santiago Segui; Carolina Malagelada; Fernando Azpiroz; Petia Radeva
Title Adaptable image cuts for motility inspection using WCE  Type Journal Article
Year 2013  Publication Computerized Medical Imaging and Graphics  Abbreviated Journal CMIG
Volume 37  Issue 1  Pages 72-80
Abstract The Wireless Capsule Endoscopy (WCE) technology allows the visualization of the whole small intestine tract. Since the capsule moves freely, mainly by means of peristalsis, the data acquired during the study give a lot of information about intestinal motility. However, due to (1) the huge number of frames, (2) the complex appearance of the intestinal scene and (3) intestinal dynamics that make the visualization of small intestine physiological phenomena difficult, the analysis of WCE data requires computer-aided systems to speed it up. In this paper, we propose an efficient algorithm for building a novel representation of the WCE video data that is optimal for motility analysis and inspection. The algorithm transforms the 3D video data into a 2D longitudinal view by choosing, from the intestinal motility point of view, the most informative part of each frame. This step maximizes the lumen visibility in its longitudinal extension. The task of finding “the best longitudinal view” has been defined as a cost function optimization problem whose global minimum is obtained using Dynamic Programming. Validation on both synthetic data and WCE data shows that the adaptive longitudinal view is a good alternative to traditional motility analysis based on video inspection. The proposed novel data representation gives a new, holistic insight into small intestine motility, making it easy to define and analyze motility events that are difficult to spot by analyzing the WCE video. Moreover, the visual inspection of small intestine motility is four times faster than video skimming of the WCE.
Notes MILAB; OR; 600.046; 605.203  Approved no
Call Number Admin @ si @ DSM2012  Serial 2151
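A minimal sketch of the dynamic-programming step described in the abstract above: one cut position is chosen per frame so that the sum of per-frame costs plus a smoothness penalty between consecutive frames is globally minimal. The cost definition and the penalty are illustrative assumptions, not the authors' exact formulation:

```python
# Hypothetical sketch: Viterbi-style dynamic programming over cut positions.
import numpy as np

def best_longitudinal_cut(cost: np.ndarray, smooth: float = 1.0) -> np.ndarray:
    """
    cost:  (n_frames, n_positions) array, cost[t, k] = cost of choosing cut k
           in frame t (e.g. low where the lumen is well visible).
    Returns the globally optimal cut index for every frame.
    """
    T, K = cost.shape
    D = np.zeros_like(cost)                 # D[t, k] = best total cost ending at (t, k)
    back = np.zeros((T, K), dtype=int)
    D[0] = cost[0]
    positions = np.arange(K)
    for t in range(1, T):
        # transition penalty: large jumps between consecutive frames are discouraged
        trans = D[t - 1][None, :] + smooth * np.abs(positions[:, None] - positions[None, :])
        back[t] = np.argmin(trans, axis=1)
        D[t] = cost[t] + trans[np.arange(K), back[t]]
    # backtrack the optimal path
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmin(D[-1]))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```

The backtracking pass recovers the globally optimal path, which is what makes dynamic programming attractive here compared to greedy per-frame choices.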