Author: Md Mostafa Kamal Sarker; Hatem A. Rashwan; Farhan Akram; Vivek Kumar Singh; Syeda Furruka Banu; Forhad U H Chowdhury; Kabir Ahmed Choudhury; Sylvie Chambon; Petia Radeva; Domenec Puig; Mohamed Abdel-Nasser
Title: SLSNet: Skin lesion segmentation using a lightweight generative adversarial network
Type: Journal Article
Year: 2021
Publication: Expert Systems With Applications (ESWA)
Volume: 183
Pages: 115433
Abstract: The determination of precise skin lesion boundaries in dermoscopic images using automated methods faces many challenges, most notably the presence of hair, inconspicuous lesion edges and low contrast in dermoscopic images, and variability in the color, texture and shape of skin lesions. Existing deep learning-based skin lesion segmentation algorithms are expensive in terms of computational time and memory. Consequently, running such segmentation algorithms requires a powerful GPU and high-bandwidth memory, which are not available in dermoscopy devices. Thus, this article aims to achieve precise skin lesion segmentation with minimum resources: a lightweight, efficient generative adversarial network (GAN) model called SLSNet, which combines 1-D kernel factorized networks, position and channel attention, and multiscale aggregation mechanisms within a GAN framework. The 1-D kernel factorized network reduces the computational cost of 2-D filtering. The position and channel attention modules enhance the discriminative ability between the lesion and non-lesion feature representations in the spatial and channel dimensions, respectively. A multiscale block is also used to aggregate the coarse-to-fine features of input skin images and reduce the effect of artifacts. SLSNet is evaluated on two publicly available datasets: ISBI 2017 and ISIC 2018. Although SLSNet has only 2.35 million parameters, the experimental results demonstrate that it achieves segmentation results on a par with state-of-the-art skin lesion segmentation methods, with an accuracy of 97.61% and Dice and Jaccard similarity coefficients of 90.63% and 81.98%, respectively. SLSNet can run at more than 110 frames per second (FPS) on a single GTX 1080 Ti GPU, which is faster than well-known deep learning-based image segmentation models such as FCN. Therefore, SLSNet can be used for practical dermoscopic applications.
Notes: MILAB; no proj
Approved: no
Call Number: Admin @ si @ SRA2021
Serial: 3633
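The key efficiency idea described in the SLSNet abstract above, replacing 2-D convolutions with 1-D kernel factorized ones, can be illustrated with a short sketch. The following is a minimal, illustrative example assuming PyTorch; the module name, channel sizes, and activation are hypothetical choices and not the authors' SLSNet code. A k x k convolution costs on the order of k^2 multiplications per output element, while the factorized pair costs about 2k, which is where the parameter and speed savings come from.

import torch
import torch.nn as nn

class FactorizedConv(nn.Module):
    """Approximate a k x k convolution with a k x 1 followed by a 1 x k one."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.vertical = nn.Conv2d(in_ch, out_ch, kernel_size=(k, 1), padding=(k // 2, 0))
        self.horizontal = nn.Conv2d(out_ch, out_ch, kernel_size=(1, k), padding=(0, k // 2))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Two cheap 1-D filterings stand in for one expensive 2-D filtering.
        return self.act(self.horizontal(self.act(self.vertical(x))))

x = torch.randn(1, 16, 64, 64)           # dummy feature map
print(FactorizedConv(16, 32)(x).shape)   # torch.Size([1, 32, 64, 64])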
 

 
Author: Andreea Glavan; Alina Matei; Petia Radeva; Estefania Talavera
Title: Does our social life influence our nutritional behaviour? Understanding nutritional habits from egocentric photo-streams
Type: Journal Article
Year: 2021
Publication: Expert Systems with Applications (ESWA)
Volume: 171
Pages: 114506
Abstract: Nutrition and social interactions are both key aspects of the daily lives of humans. In this work, we propose a system to evaluate the influence of social interaction on the nutritional habits of a person from a first-person perspective. In order to detect the routine of an individual, we construct a nutritional behaviour pattern discovery model, which outputs routines over a number of days. Our method evaluates the similarity of routines with respect to visited food-related scenes over the collected days, making use of Dynamic Time Warping, as well as considering social engagement and its correlation with food-related activities. The nutritional and social descriptors of the collected days are evaluated and encoded using an LSTM Autoencoder. Later, the obtained latent space is clustered to find similar days, with outliers removed using the Isolation Forest method. Moreover, we introduce a new score metric to evaluate the performance of the proposed algorithm. We validate our method on 104 days and more than 100,000 egocentric images gathered by 7 users. Several visualizations are evaluated to aid understanding of the findings. Our results demonstrate the good performance and applicability of our proposed model for social-related nutritional behaviour understanding. Finally, relevant applications of the model are discussed by analysing the discovered routines of particular individuals.
Notes: MILAB; no proj
Approved: no
Call Number: Admin @ si @ GMR2021
Serial: 3634
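Two of the building blocks named in the abstract above, Dynamic Time Warping between the food-related scene sequences of two days and Isolation Forest filtering of outlier days, can be sketched in a few lines. This is a simplified illustration assuming NumPy and scikit-learn; the random arrays stand in for the authors' nutritional and social descriptors and are purely hypothetical.

import numpy as np
from sklearn.ensemble import IsolationForest

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

day_a = np.random.rand(50)   # placeholder per-image food-scene scores for one day
day_b = np.random.rand(60)
print("DTW distance between the two days:", dtw_distance(day_a, day_b))

days = np.random.rand(104, 8)                        # placeholder latent day descriptors
keep = IsolationForest(random_state=0).fit_predict(days) == 1
print("days kept after outlier removal:", int(keep.sum()))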
 

 
Author: Mohammad Momeny; Ali Asghar Neshat; Ahmad Jahanbakhshi; Majid Mahmoudi; Yiannis Ampatzidis; Petia Radeva
Title: Grading and fraud detection of saffron via learning-to-augment incorporated Inception-v4 CNN
Type: Journal Article
Year: 2023
Publication: Food Control (FC)
Volume: 147
Pages: 109554
Abstract: Saffron is a well-known product in the food industry. It is one of the spices that is sometimes adulterated with the sole motive of gaining more economic profit. Today, machine vision systems are widely used in controlling the quality of food and agricultural products as a new, non-destructive, and inexpensive approach. In this study, a machine vision system based on deep learning was used to detect fraud and grade saffron quality. A dataset of 1869 images was created and categorized into 6 classes: dried saffron stigma using a dryer; dried saffron stigma using the pressing method; pure saffron stem; sunflower; saffron stem mixed with food coloring; and corn silk mixed with food coloring. A Learning-to-Augment incorporated Inception-v4 Convolutional Neural Network (LAII-v4 CNN) was developed for grading and fraud detection of saffron in images captured by smartphones. The best data augmentation policies were selected with the proposed LAII-v4 CNN using images corrupted by Gaussian, speckle, and impulse noise, in order to address overfitting of the model. The proposed LAII-v4 CNN was compared with regular CNN-based methods and traditional classifiers. An Ensemble of Bagged Decision Trees, an Ensemble of Boosted Decision Trees, k-Nearest Neighbor, Random Under-sampling Boosted Trees, and a Support Vector Machine were used to classify features extracted by Histograms of Oriented Gradients and Local Binary Patterns and selected by Principal Component Analysis. The results showed that the proposed LAII-v4 CNN, with an accuracy of 99.5%, achieved the best performance by employing batch normalization, Dropout, and leaky ReLU.
Notes: MILAB
Approved: no
Call Number: Admin @ si @ MNJ2023
Serial: 3882
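The augmentation policies mentioned in the abstract above (Gaussian, speckle, and impulse noise) are standard image corruptions and are easy to reproduce. The sketch below is only illustrative and assumes NumPy; the score_policy stub stands in for the learning-to-augment step, which in the paper would score each policy by the validation performance of the Inception-v4 model, and all names here are hypothetical.

import numpy as np

def gaussian_noise(img, sigma=0.05):
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

def speckle_noise(img, sigma=0.05):
    return np.clip(img * (1.0 + np.random.normal(0.0, sigma, img.shape)), 0.0, 1.0)

def impulse_noise(img, p=0.02):
    out = img.copy()
    mask = np.random.rand(*img.shape) < p
    out[mask] = np.random.choice([0.0, 1.0], size=int(mask.sum()))
    return out

policies = {"gaussian": gaussian_noise, "speckle": speckle_noise, "impulse": impulse_noise}

def score_policy(policy_fn):
    # Placeholder: the real criterion would be the validation accuracy of the
    # CNN trained on images corrupted by this policy.
    return np.random.rand()

img = np.random.rand(224, 224, 3)        # dummy smartphone image in [0, 1]
best = max(policies, key=lambda name: score_policy(policies[name]))
print("selected augmentation policy:", best, "| corrupted shape:", policies[best](img).shape)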
 

 
Author: Eduardo Aguilar; Bhalaji Nagarajan; Beatriz Remeseiro; Petia Radeva
Title: Bayesian deep learning for semantic segmentation of food images
Type: Journal Article
Year: 2022
Publication: Computers and Electrical Engineering (CEE)
Volume: 103
Pages: 108380
Keywords: Deep learning; Uncertainty quantification; Bayesian inference; Image segmentation; Food analysis
Abstract: Deep learning has provided promising results in various applications; however, algorithms tend to be overconfident in their predictions, even though they may be entirely wrong. Particularly for critical applications, the model should provide answers only when it is very sure of them. This article presents a Bayesian version of two different state-of-the-art semantic segmentation methods to perform multi-class segmentation of foods and estimate the uncertainty about the given predictions. The proposed methods were evaluated on three public pixel-annotated food datasets. As a result, we can conclude that Bayesian methods improve the performance achieved by the baseline architectures and, in addition, provide information to improve decision-making. Furthermore, based on the extracted uncertainty map, we proposed three measures to rank the images according to the degree of noisy annotations they contained. Note that the top 135 images ranked by one of these measures include more than half of the worst-labeled food images.
Address: October 2022
Publisher: Science Direct
Notes: MILAB
Approved: no
Call Number: Admin @ si @ ANR2022
Serial: 3763
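The abstract above does not state which approximate Bayesian inference scheme the two segmentation networks use, so the sketch below only illustrates one common option, Monte Carlo dropout: dropout is kept active at test time, several stochastic forward passes are averaged, and per-pixel predictive entropy serves as the uncertainty map. It assumes PyTorch, and the tiny network and the five-class output are hypothetical placeholders, not the paper's architectures.

import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.5),                 # the stochastic layer sampled at test time
    nn.Conv2d(16, 5, 1),                 # 5 food classes, illustrative only
)

x = torch.randn(1, 3, 64, 64)            # dummy food image
net.train()                              # keep dropout active during inference
with torch.no_grad():
    probs = torch.stack([F.softmax(net(x), dim=1) for _ in range(20)]).mean(dim=0)

# Per-pixel predictive entropy: higher values mean the model is less sure.
uncertainty = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
prediction = probs.argmax(dim=1)
print(prediction.shape, uncertainty.shape)   # torch.Size([1, 64, 64]) twice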
 

 
Author: Estefania Talavera; Carolin Wuerich; Nicolai Petkov; Petia Radeva
Title: Topic modelling for routine discovery from egocentric photo-streams
Type: Journal Article
Year: 2020
Publication: Pattern Recognition (PR)
Volume: 104
Pages: 107330
Keywords: Routine; Egocentric vision; Lifestyle; Behaviour analysis; Topic modelling
Abstract: Developing tools to understand and visualize lifestyle is of high interest when addressing the improvement of people's habits and well-being. Routine, defined as the usual things that a person does daily, helps describe an individual's lifestyle. With this paper, we are the first to address the development of novel tools for the automatic discovery of an individual's routine days from his/her egocentric images. In the proposed model, sequences of images are first characterized by semantic labels detected by pre-trained CNNs. Then, these features are organized into temporal-semantic documents that are later embedded into a topic-model space. Finally, Dynamic Time Warping and Spectral Clustering are used for the final routine/non-routine discrimination of days. Moreover, we introduce the new EgoRoutine dataset, a collection of 104 egocentric days with more than 100,000 images recorded by 7 users. Results show that routines can be discovered and behavioural patterns can be observed.
Notes: MILAB; no proj
Approved: no
Call Number: Admin @ si @ TWP2020
Serial: 3435
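The pipeline outlined in the abstract above, per-day documents of detected semantic labels embedded with a topic model and then clustered, can be sketched compactly. This is only an illustrative example assuming scikit-learn; the four toy day documents and the numbers of topics and clusters are made up, and the final Dynamic Time Warping step is omitted.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import SpectralClustering

# Each "document" lists the semantic labels a pre-trained CNN detected in one day.
day_documents = [
    "kitchen food plate coffee office desk screen",
    "street car shop supermarket food kitchen",
    "office desk screen meeting coffee",
    "kitchen plate food dinner table",
]

counts = CountVectorizer().fit_transform(day_documents)
topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)

# Group days with similar topic mixtures; routine days should fall together.
labels = SpectralClustering(n_clusters=2, random_state=0).fit_predict(topics)
print("day clusters:", labels)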