Author: Ruben Tito; Dimosthenis Karatzas; Ernest Valveny
Title: Document Collection Visual Question Answering
Type: Conference Article
Year: 2021
Publication: 16th International Conference on Document Analysis and Recognition
Volume: 12822
Pages: 778-792
Keywords: Document collection; Visual Question Answering
Abstract: Current tasks and methods in Document Understanding aim to process documents as single elements. However, documents are usually organized in collections (historical records, purchase invoices) that provide context useful for their interpretation. To address this problem, we introduce Document Collection Visual Question Answering (DocCVQA), a new dataset and related task, where questions are posed over a whole collection of document images and the goal is not only to provide the answer to the given question, but also to retrieve the set of documents that contain the information needed to infer the answer. Along with the dataset, we propose a new evaluation metric and baselines which provide further insights into the new dataset and task.
Abbreviated Series Title: LNCS
Conference: ICDAR
Notes: DAG; 600.121
Approved: no
Call Number: Admin @ si @ TKV2021
Serial: 3622
 

 
Author: Ruben Tito; Minesh Mathew; C.V. Jawahar; Ernest Valveny; Dimosthenis Karatzas
Title: ICDAR 2021 Competition on Document Visual Question Answering
Type: Conference Article
Year: 2021
Publication: 16th International Conference on Document Analysis and Recognition
Pages: 635-649
Abstract: In this report we present the results of the ICDAR 2021 edition of the Document Visual Question Answering challenges. This edition complements the previous tasks on Single Document VQA and Document Collection VQA with a newly introduced task on Infographics VQA. Infographics VQA is based on a new dataset of more than 5,000 infographic images and 30,000 question-answer pairs. The winning methods scored 0.6120 ANLS on the Infographics VQA task, 0.7743 ANLSL on the Document Collection VQA task and 0.8705 ANLS on Single Document VQA. We present a summary of the datasets used for each task, a description of each of the submitted methods, and the results and analysis of their performance. A summary of the progress made on Single Document VQA since the first edition of the DocVQA 2020 challenge is also presented.
Address: Virtual; Lausanne; Switzerland; September 2021
Conference: ICDAR
Notes: DAG; 600.121
Approved: no
Call Number: Admin @ si @ TMJ2021
Serial: 3624
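The ANLS figures quoted in this abstract follow the standard DocVQA evaluation protocol. Below is a minimal Python sketch of Average Normalized Levenshtein Similarity, assuming the usual threshold tau = 0.5 and lower-cased, trimmed answers; function names are illustrative, and the list-based ANLSL variant is not reproduced here.

def levenshtein(a: str, b: str) -> int:
    # single-row dynamic-programming edit distance
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
            prev = cur
    return dp[-1]

def anls(predictions, ground_truths, tau=0.5):
    # predictions: one answer string per question;
    # ground_truths: a list of accepted answer strings per question.
    total = 0.0
    for pred, answers in zip(predictions, ground_truths):
        best = 0.0
        for gt in answers:
            p, g = pred.strip().lower(), gt.strip().lower()
            nl = levenshtein(p, g) / max(len(p), len(g), 1)
            best = max(best, 1.0 - nl if nl < tau else 0.0)
        total += best
    return total / max(len(predictions), 1)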
 

 
Author: Minesh Mathew; Viraj Bagal; Ruben Tito; Dimosthenis Karatzas; Ernest Valveny; C.V. Jawahar
Title: InfographicVQA
Type: Conference Article
Year: 2022
Publication: Winter Conference on Applications of Computer Vision
Pages: 1697-1706
Keywords: Document Analysis Datasets; Evaluation and Comparison of Vision Algorithms; Vision and Languages
Abstract: Infographics communicate information using a combination of textual, graphical and visual elements. This work explores the automatic understanding of infographic images by using a Visual Question Answering technique. To this end, we present InfographicVQA, a new dataset comprising a diverse collection of infographics and question-answer annotations. The questions require methods that jointly reason over the document layout, textual content, graphical elements, and data visualizations. We curate the dataset with an emphasis on questions that require elementary reasoning and basic arithmetic skills. For VQA on the dataset, we evaluate two strong Transformer-based baselines. Both baselines yield unsatisfactory results compared to near-perfect human performance on the dataset. The results suggest that VQA on infographics, images that are designed to communicate information quickly and clearly to the human brain, is ideal for benchmarking machine understanding of complex document images. The dataset is available for download at docvqa.org.
Address: Virtual; Waikoloa; Hawaii; USA; January 2022
Conference: WACV
Notes: DAG; 600.155
Approved: no
Call Number: MBT2022
Serial: 3625
 

 
Author: Alejandro Cartas; Petia Radeva; Mariella Dimiccoli
Title: Modeling long-term interactions to enhance action recognition
Type: Conference Article
Year: 2021
Publication: 25th International Conference on Pattern Recognition
Pages: 10351-10358
Abstract: In this paper, we propose a new approach to understand actions in egocentric videos that exploits the semantics of object interactions at both frame and temporal levels. At the frame level, we use a region-based approach that takes as input a primary region roughly corresponding to the user's hands and a set of secondary regions potentially corresponding to the interacting objects, and calculates the action score through a CNN formulation. This information is then fed to a Hierarchical Long Short-Term Memory (HLSTM) network that captures temporal dependencies between actions within and across shots. Ablation studies thoroughly validate the proposed approach, showing in particular that both levels of the HLSTM architecture contribute to performance improvement. Furthermore, quantitative comparisons show that the proposed approach outperforms the state of the art in action recognition on standard benchmarks, without relying on motion information.
Address: January 2021
Conference: ICPR
Notes: MILAB
Approved: no
Call Number: Admin @ si @ CRD2021
Serial: 3626
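For intuition, the following is a minimal PyTorch sketch of a two-level hierarchical LSTM in the spirit of the HLSTM described above: a lower LSTM aggregates frame-level features within each shot, and an upper LSTM models dependencies across shots. All dimensions and the last-hidden-state pooling are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class HierarchicalLSTM(nn.Module):
    # Lower LSTM summarizes frames within a shot; upper LSTM links shots.
    def __init__(self, feat_dim=512, hidden=256, num_actions=100):
        super().__init__()
        self.within_shot = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.across_shots = nn.LSTM(hidden, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_actions)

    def forward(self, shots):
        # shots: (batch, num_shots, frames_per_shot, feat_dim)
        b, s, f, d = shots.shape
        lower_out, _ = self.within_shot(shots.reshape(b * s, f, d))
        shot_repr = lower_out[:, -1].reshape(b, s, -1)  # last hidden state per shot
        upper_out, _ = self.across_shots(shot_repr)
        return self.classifier(upper_out)  # per-shot action logits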
 

 
Author: Giovanni Maria Farinella; Petia Radeva; Jose Braz; Kadi Bouatouch
Title: Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Volume 4)
Type: Book Whole
Year: 2021
Publication: Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. VISIGRAPP 2021
Volume: 4
Abstract: This book contains the proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021), which was organized and sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC), endorsed by the International Association for Pattern Recognition (IAPR), and held in cooperation with the ACM Special Interest Group on Graphics and Interactive Techniques (SIGGRAPH), the European Association for Computer Graphics (EUROGRAPHICS), the EUROGRAPHICS Portuguese Chapter, the VRVis Center for Virtual Reality and Visualization Forschungs-GmbH, the French Association for Computer Graphics (AFIG), and the Society for Imaging Science and Technology (IS&T). The proceedings published here demonstrate new and innovative solutions and highlight technical problems in each field that are challenging and worthy of being disseminated to the interested research audiences. VISIGRAPP 2021 was organized to promote a discussion forum about the conference’s research topics between researchers, developers, manufacturers and end-users, and to establish guidelines in the development of more advanced solutions. This year VISIGRAPP was, exceptionally, held as a web-based event due to the COVID-19 pandemic, from 8–10 February. We received a high number of paper submissions for this edition of VISIGRAPP, 371 in total, with contributions from 52 countries. This attests to the success and global dimension of VISIGRAPP. To evaluate each submission, we used a hierarchical process of double-blind evaluation where each paper was reviewed by two to six experts from the International Program Committee (IPC). The IPC selected for oral presentation and for publication as full papers 12 papers from GRAPP, 8 from HUCAPP, 11 papers from IVAPP, and 56 papers from VISAPP, which resulted in a full-paper acceptance ratio of 24% and a high-quality program. Apart from the above full papers, the conference program also features 118 short papers and 67 poster presentations. We hope that these conference proceedings, which are submitted for indexing by Thomson Reuters Conference Proceedings Citation Index, SCOPUS, DBLP, Semantic Scholar, Google Scholar, EI and Microsoft Academic, will help the Computer Vision, Imaging, Visualization, Computer Graphics and Human-Computer Interaction communities to find interesting research work. Moreover, we are proud to report that the program also includes three plenary keynote lectures, given by internationally distinguished researchers, namely Federico Tombari (Google and Technical University of Munich, Germany), Dieter Schmalstieg (Graz University of Technology, Austria) and Nathalie Henry Riche (Microsoft Research, United States), thus contributing to increasing the overall quality of the conference and providing a deeper understanding of the conference’s interest fields. Furthermore, a short list of the presented papers will be selected to be extended into a forthcoming book of VISIGRAPP Selected Papers to be published by Springer during 2021 in the CCIS series. Moreover, a short list of presented papers will be selected for publication of extended and revised versions in a special issue of the Springer Nature Computer Science journal. All papers presented at this conference will be available at the SCITEPRESS Digital Library.
Three awards are delivered at the closing session, to recognize the best conference paper, the best student paper and the best poster for each of the four conferences. There is also an award for the best industrial paper, to be delivered at the closing session for VISAPP. We would like to express our thanks, first of all, to the authors of the technical papers, whose work and dedication made it possible to put together a program that we believe to be very exciting and of high technical quality. Next, we would like to thank the Area Chairs, all the members of the program committee and the auxiliary reviewers, who helped us with their expertise and time. We would also like to thank the invited speakers for their invaluable contribution and for sharing their vision in their talks. Finally, we gratefully acknowledge the professional support of the INSTICC team for all organizational processes, especially given the need to introduce online streaming, forum management, direct messaging facilitation and other web-based activities in order to make it possible for VISIGRAPP 2021 authors to present their work and share ideas with colleagues in spite of the logistical difficulties caused by the current pandemic situation. We wish you all an exciting conference. We hope to meet you again for the next edition of VISIGRAPP, details of which are available at http://www.visigrapp.org.
Conference: VISIGRAPP
Notes: MILAB
Approved: no
Call Number: Admin @ si @ FRB2021a
Serial: 3627
 

 
Author: Giovanni Maria Farinella; Petia Radeva; Jose Braz; Kadi Bouatouch
Title: Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Volume 5)
Type: Book Whole
Year: 2021
Publication: Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications – VISIGRAPP 2021
Volume: 5
Abstract: This book contains the proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021), which was organized and sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC), endorsed by the International Association for Pattern Recognition (IAPR), and held in cooperation with the ACM Special Interest Group on Graphics and Interactive Techniques (SIGGRAPH), the European Association for Computer Graphics (EUROGRAPHICS), the EUROGRAPHICS Portuguese Chapter, the VRVis Center for Virtual Reality and Visualization Forschungs-GmbH, the French Association for Computer Graphics (AFIG), and the Society for Imaging Science and Technology (IS&T). The proceedings published here demonstrate new and innovative solutions and highlight technical problems in each field that are challenging and worthy of being disseminated to the interested research audiences. VISIGRAPP 2021 was organized to promote a discussion forum about the conference’s research topics between researchers, developers, manufacturers and end-users, and to establish guidelines in the development of more advanced solutions. This year VISIGRAPP was, exceptionally, held as a web-based event due to the COVID-19 pandemic, from 8–10 February. We received a high number of paper submissions for this edition of VISIGRAPP, 371 in total, with contributions from 52 countries. This attests to the success and global dimension of VISIGRAPP. To evaluate each submission, we used a hierarchical process of double-blind evaluation where each paper was reviewed by two to six experts from the International Program Committee (IPC). The IPC selected for oral presentation and for publication as full papers 12 papers from GRAPP, 8 from HUCAPP, 11 papers from IVAPP, and 56 papers from VISAPP, which resulted in a full-paper acceptance ratio of 24% and a high-quality program. Apart from the above full papers, the conference program also features 118 short papers and 67 poster presentations. We hope that these conference proceedings, which are submitted for indexing by Thomson Reuters Conference Proceedings Citation Index, SCOPUS, DBLP, Semantic Scholar, Google Scholar, EI and Microsoft Academic, will help the Computer Vision, Imaging, Visualization, Computer Graphics and Human-Computer Interaction communities to find interesting research work. Moreover, we are proud to report that the program also includes three plenary keynote lectures, given by internationally distinguished researchers, namely Federico Tombari (Google and Technical University of Munich, Germany), Dieter Schmalstieg (Graz University of Technology, Austria) and Nathalie Henry Riche (Microsoft Research, United States), thus contributing to increasing the overall quality of the conference and providing a deeper understanding of the conference’s interest fields. Furthermore, a short list of the presented papers will be selected to be extended into a forthcoming book of VISIGRAPP Selected Papers to be published by Springer during 2021 in the CCIS series. Moreover, a short list of presented papers will be selected for publication of extended and revised versions in a special issue of the Springer Nature Computer Science journal. All papers presented at this conference will be available at the SCITEPRESS Digital Library.
Three awards are delivered at the closing session, to recognize the best conference paper, the best student paper and the best poster for each of the four conferences. There is also an award for the best industrial paper, to be delivered at the closing session for VISAPP. We would like to express our thanks, first of all, to the authors of the technical papers, whose work and dedication made it possible to put together a program that we believe to be very exciting and of high technical quality. Next, we would like to thank the Area Chairs, all the members of the program committee and the auxiliary reviewers, who helped us with their expertise and time. We would also like to thank the invited speakers for their invaluable contribution and for sharing their vision in their talks. Finally, we gratefully acknowledge the professional support of the INSTICC team for all organizational processes, especially given the need to introduce online streaming, forum management, direct messaging facilitation and other web-based activities in order to make it possible for VISIGRAPP 2021 authors to present their work and share ideas with colleagues in spite of the logistical difficulties caused by the current pandemic situation. We wish you all an exciting conference. We hope to meet you again for the next edition of VISIGRAPP, details of which are available at http://www.visigrapp.org.
Conference: VISIGRAPP
Notes: MILAB
Approved: no
Call Number: Admin @ si @ FRB2021b
Serial: 3628
 

 
Author: Ahmed M. A. Salih; Ilaria Boscolo Galazzo; Zahra Raisi-Estabragh; Steffen E. Petersen; Polyxeni Gkontra; Karim Lekadir; Gloria Menegaz; Petia Radeva
Title: A New Scheme for the Assessment of the Robustness of Explainable Methods Applied to Brain Age Estimation
Type: Conference Article
Year: 2021
Publication: 34th International Symposium on Computer-Based Medical Systems
Pages: 492-497
Abstract: Deep learning methods show great promise in a range of settings, including the biomedical field. Explainability of these models is important in these fields for building end-user trust and facilitating their confident deployment. Although several Machine Learning Interpretability tools have been proposed so far, there is currently no recognized evaluation standard to transfer the explainability results into a quantitative score. Several measures have been proposed as proxies for the quantitative assessment of explainability methods. However, the robustness of the list of significant features provided by the explainability methods has not been addressed. In this work, we propose a new proxy for assessing the robustness of the list of significant features provided by two explainability methods. Our validation is defined at the functionality-grounded level, is based on the ranked correlation statistical index, and demonstrates its successful application in the framework of brain age estimation. We assessed our proxy on brain age estimation using neuroscience data. Our results indicate small variability and high robustness in the considered explainability methods using this new proxy.
Conference: CBMS
Notes: MILAB; no proj
Approved: no
Call Number: Admin @ si @ SBZ2021
Serial: 3629
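As a concrete illustration of the ranked-correlation proxy described above, the sketch below compares the feature-importance rankings produced by two explainability methods using Spearman's rho; the importance arrays are placeholders, not data from the paper.

import numpy as np
from scipy.stats import spearmanr

# Placeholder importance scores for the same features from two explainers
# (e.g., SHAP vs. permutation importance); real values would come from the
# trained brain-age model.
importance_a = np.array([0.31, 0.05, 0.22, 0.18, 0.24])
importance_b = np.array([0.28, 0.07, 0.25, 0.15, 0.25])

# Spearman's rho operates on ranks, so it directly measures how consistently
# the two methods order the features by importance.
rho, pval = spearmanr(importance_a, importance_b)
print(f"Spearman rho = {rho:.3f} (p = {pval:.3f})")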
 

 
Author: Manisha Das; Deep Gupta; Petia Radeva; Ashwini M. Bakde
Title: Multi-scale decomposition-based CT-MR neurological image fusion using optimized bio-inspired spiking neural model with meta-heuristic optimization
Type: Journal Article
Year: 2021
Publication: International Journal of Imaging Systems and Technology
Abbreviated Journal: IMA
Volume: 31
Issue: 4
Pages: 2170-2188
Abstract: Multi-modal medical image fusion plays an important role in clinical diagnosis and works as an assistance model for clinicians. In this paper, a computed tomography-magnetic resonance (CT-MR) image fusion model is proposed using an optimized bio-inspired spiking feedforward neural network in different decomposition domains. First, the source images are decomposed into base (low-frequency) and detail (high-frequency) layer components. The low-frequency subbands are fused using texture energy measures to capture the local energy, contrast, and small edges in the fused image. The high-frequency coefficients are fused using firing maps obtained by a pixel-activated neural model whose parameters are optimized with three different techniques, namely differential evolution, cuckoo search, and gray wolf optimization, individually. In the optimization model, a fitness function is computed based on the edge index of the resultant fused images, which helps to extract and preserve the sharp edges available in the source CT and MR images. To validate the fusion performance, a detailed comparative analysis is presented between the proposed and state-of-the-art methods in terms of quantitative and qualitative measures along with computational complexity. Experimental results show that the proposed method produces fused images of significantly better visual quality while outperforming the existing methods.
Notes: MILAB; no menciona
Approved: no
Call Number: Admin @ si @ DGR2021a
Serial: 3630
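For intuition only, here is a deliberately simplified base/detail fusion pipeline in the spirit of the decomposition scheme above: a Gaussian blur stands in for the multi-scale decomposition, and plain choose-max / max-absolute rules stand in for the optimized spiking neural model and texture energy measures. Inputs are assumed to be co-registered arrays scaled to [0, 1].

import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_ct_mr(ct: np.ndarray, mr: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    # Split each source into a smooth base layer and a detail layer.
    base_ct, base_mr = gaussian_filter(ct, sigma), gaussian_filter(mr, sigma)
    detail_ct, detail_mr = ct - base_ct, mr - base_mr
    # Base layers: choose-max keeps the brighter background structure.
    fused_base = np.maximum(base_ct, base_mr)
    # Detail layers: keep the stronger edge/texture response pixel-wise.
    fused_detail = np.where(np.abs(detail_ct) >= np.abs(detail_mr),
                            detail_ct, detail_mr)
    return np.clip(fused_base + fused_detail, 0.0, 1.0)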
 

 
Author: Vacit Oguz Yazici
Title: Towards Smart Fashion: Visual Recognition of Products and Attributes
Type: Book Whole
Year: 2022
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: Artificial intelligence is innovating the fashion industry by proposing new applications and solutions to the problems encountered by researchers and engineers working in the industry. In this thesis, we address three of these problems. In the first part of the thesis, we tackle the problem of multi-label image classification, which is closely related to fashion attribute recognition. In the second part of the thesis, we address two problems that are specific to fashion. First, we address the problem of main product detection, which is the task of associating the correct image parts (e.g. bounding boxes) with the fashion product being sold. Second, we address the problem of color naming for multicolored fashion items. The task of multi-label image classification consists in assigning various concepts such as objects or attributes to images. Usually, there are dependencies that can be learned between the concepts to capture label correlations (the chair and table classes are more likely to co-exist than chair and giraffe).
If we treat the multi-label image classification problem as an orderless set prediction problem, we can exploit recurrent neural networks (RNNs) to capture label correlations. However, RNNs are trained to predict ordered sequences of tokens, so if the order of the predicted sequence differs from the order of the ground-truth sequence, the model is penalized even though the predictions are correct. Therefore, in the first part of the thesis, we propose an orderless loss function which orders the labels in the ground-truth sequence dynamically so that the minimum loss is achieved. This results in a significant improvement of RNN models on multi-label image classification over the previous methods.
However, RNNs suffer from long-term dependencies when the cardinality of the set grows. The decoding process might stop early if the current hidden state cannot find any object and outputs the termination token, causing the remaining classes not to be predicted and lowering the recall metric. Transformers can be used to avoid the long-term dependency problem by exploiting their self-attention modules, which process sequential data simultaneously. Consequently, we propose a novel transformer model for multi-label image classification which surpasses the state-of-the-art results by a large margin.
In the second part of the thesis, we focus on two fashion-specific problems. Main product detection is the task of associating image parts with the fashion product that is being sold, generally using associated textual metadata (product title or description). Normally, in fashion e-commerce, products are represented by multiple images where a person wears the product along with other fashion items. If all the fashion items in the images are marked with bounding boxes, we can use the textual metadata to decide which item is the main product. The initial work treated each of these images independently, discarding the fact that they all belong to the same product. In this thesis, we represent the bounding boxes from all the images as nodes in a fully connected graph. This allows the algorithm to learn relations between the nodes during training and take the entire context into account for the final decision. Our algorithm results in a significant improvement over the state of the art.
Moreover, we address the problem of color naming for multicolored fashion items, which is a challenging task due to external factors such as illumination changes or objects that act as clutter. In the context of multi-label classification, the vaguely defined lines between the classes in the color space cause ambiguity. For example, a shade of blue which is very close to green might cause the model to incorrectly predict the colors blue and green at the same time. Based on this, models trained for color naming are expected to recognize the colors and their quantities in both single-colored and multicolored fashion items. Therefore, in this thesis, we propose a novel architecture with an additional head that explicitly estimates the number of colors in fashion items. This removes the ambiguity problem and results in better color naming performance.
Address: January 2022
Thesis: Ph.D. thesis
Publisher: IMPRIMA
Editor: Joost Van de Weijer; Arnau Ramisa
ISBN: 978-84-122714-6-1
Notes: LAMP
Approved: no
Call Number: Admin @ si @ Ogu2022
Serial: 3631
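The orderless loss from the first part of the thesis can be made concrete with Hungarian matching: before the sequence loss is computed, the ground-truth labels are assigned to the decoding steps that predict them best, so a correct but differently ordered prediction is not penalized. A minimal sketch under those assumptions (inputs are illustrative, not the thesis' exact formulation):

import numpy as np
from scipy.optimize import linear_sum_assignment

def orderless_nll(log_probs: np.ndarray, labels: list) -> float:
    # log_probs: (steps, num_classes) per-step log-probabilities from the RNN.
    # labels: unordered ground-truth class indices, len(labels) <= steps.
    cost = -log_probs[:, labels]              # NLL of label j at step i
    rows, cols = linear_sum_assignment(cost)  # min-cost step/label matching
    return float(cost[rows, cols].mean())

With this matching, the target ordering is recomputed for every training example, which is the dynamic reordering the abstract describes.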
 

 
Author: Marta Diez-Ferrer; Debora Gil; Elena Carreño; Susana Padrones; Samantha Aso; Vanesa Vicens; Noelia Cubero de Frutos; Rosa Lopez Lisbona; Carles Sanchez; Agnes Borras; Antoni Rosell
Title: Positive Airway Pressure-Enhanced CT to Improve Virtual Bronchoscopic Navigation
Type: Journal Article
Year: 2017
Publication: European Respiratory Journal
Abbreviated Journal: ERJ
Notes: IAM
Approved: no
Call Number: Admin @ si @ DGC2017b
Serial: 3632
 

 
Author: Md. Mostafa Kamal Sarker; Hatem A. Rashwan; Farhan Akram; Vivek Kumar Singh; Syeda Furruka Banu; Forhad U H Chowdhury; Kabir Ahmed Choudhury; Sylvie Chambon; Petia Radeva; Domenec Puig; Mohamed Abdel-Nasser
Title: SLSNet: Skin lesion segmentation using a lightweight generative adversarial network
Type: Journal Article
Year: 2021
Publication: Expert Systems With Applications
Abbreviated Journal: ESWA
Volume: 183
Pages: 115433
Abstract: The determination of precise skin lesion boundaries in dermoscopic images using automated methods faces many challenges, most importantly the presence of hair, inconspicuous lesion edges and low contrast in dermoscopic images, and variability in the color, texture and shapes of skin lesions. Existing deep learning-based skin lesion segmentation algorithms are expensive in terms of computational time and memory. Consequently, running such segmentation algorithms requires a powerful GPU and high-bandwidth memory, which are not available in dermoscopy devices. Thus, this article aims to achieve precise skin lesion segmentation with minimum resources: a lightweight, efficient generative adversarial network (GAN) model called SLSNet, which combines 1-D kernel factorized networks, position and channel attention, and multiscale aggregation mechanisms with a GAN model. The 1-D kernel factorized network reduces the computational cost of 2-D filtering. The position and channel attention modules enhance the discriminative ability between the lesion and non-lesion feature representations in the spatial and channel dimensions, respectively. A multiscale block is also used to aggregate the coarse-to-fine features of input skin images and reduce the effect of artifacts. SLSNet is evaluated on two publicly available datasets: ISBI 2017 and ISIC 2018. Although SLSNet has only 2.35 million parameters, the experimental results demonstrate that it achieves segmentation results on a par with the state-of-the-art skin lesion segmentation methods, with an accuracy of 97.61% and Dice and Jaccard similarity coefficients of 90.63% and 81.98%, respectively. SLSNet can run at more than 110 frames per second (FPS) on a single GTX 1080 Ti GPU, which is faster than well-known deep learning-based image segmentation models such as FCN. Therefore, SLSNet can be used for practical dermoscopic applications.
Notes: MILAB; no proj
Approved: no
Call Number: Admin @ si @ SRA2021
Serial: 3633
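The 1-D kernel factorization mentioned in this abstract replaces a k x k convolution with a k x 1 convolution followed by a 1 x k convolution, cutting the per-layer cost from k^2 to 2k weights per channel pair. A minimal PyTorch sketch follows; channel sizes and the activation are illustrative, not SLSNet's actual configuration.

import torch.nn as nn

class FactorizedConv(nn.Module):
    # A k x k convolution approximated by k x 1 followed by 1 x k.
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        p = k // 2
        self.vertical = nn.Conv2d(in_ch, out_ch, kernel_size=(k, 1), padding=(p, 0))
        self.horizontal = nn.Conv2d(out_ch, out_ch, kernel_size=(1, k), padding=(0, p))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.horizontal(self.act(self.vertical(x))))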
 

 
Author: Andreea Glavan; Alina Matei; Petia Radeva; Estefania Talavera
Title: Does our social life influence our nutritional behaviour? Understanding nutritional habits from egocentric photo-streams
Type: Journal Article
Year: 2021
Publication: Expert Systems with Applications
Abbreviated Journal: ESWA
Volume: 171
Pages: 114506
Abstract: Nutrition and social interactions are both key aspects of the daily lives of humans. In this work, we propose a system to evaluate the influence of social interaction on the nutritional habits of a person from a first-person perspective. In order to detect the routine of an individual, we construct a nutritional behaviour pattern discovery model, which outputs routines over a number of days. Our method evaluates the similarity of routines with respect to visited food-related scenes over the collected days, making use of Dynamic Time Warping, as well as considering social engagement and its correlation with food-related activities. The nutritional and social descriptors of the collected days are evaluated and encoded using an LSTM autoencoder. Later, the obtained latent space is clustered to find similar days unaffected by outliers using the Isolation Forest method. Moreover, we introduce a new score metric to evaluate the performance of the proposed algorithm. We validate our method on 104 days and more than 100,000 egocentric images gathered by 7 users. Several visualizations are evaluated to aid the understanding of the findings. Our results demonstrate good performance and applicability of our proposed model for social-related nutritional behaviour understanding. Finally, relevant applications of the model are discussed by analysing the discovered routines of particular individuals.
Notes: MILAB; no proj
Approved: no
Call Number: Admin @ si @ GMR2021
Serial: 3634
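Dynamic Time Warping, used above to measure similarity between days, aligns two sequences of possibly different lengths by minimizing accumulated pairwise distances. A minimal sketch, assuming each day is encoded as a sequence of feature vectors (the paper's actual day encoding is richer):

import numpy as np

def dtw_distance(day_a: np.ndarray, day_b: np.ndarray) -> float:
    # day_a: (n, d) and day_b: (m, d) sequences of per-timeslot feature vectors.
    n, m = len(day_a), len(day_b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(day_a[i - 1] - day_b[j - 1])
            # extend the cheapest of the three admissible alignments
            acc[i, j] = d + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return float(acc[n, m])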
 

 
Author: Giuseppe Pezzano; Oliver Diaz; Vicent Ribas Ripoll; Petia Radeva
Title: CoLe-CNN+: Context learning – Convolutional neural network for COVID-19-Ground-Glass-Opacities detection and segmentation
Type: Journal Article
Year: 2021
Publication: Computers in Biology and Medicine
Abbreviated Journal: CBM
Volume: 136
Pages: 104689
Abstract: The most common tool for population-wide COVID-19 identification is the Reverse Transcription-Polymerase Chain Reaction (RT-PCR) test, which detects the presence of the virus in throat (or sputum) swab samples. This test has a sensitivity between 59% and 71%. However, it does not provide precise information regarding the extent of the pulmonary infection. Moreover, it has been proven that, by reading a computed tomography (CT) scan, a clinician can obtain a more complete perspective of the severity of the disease. Therefore, we propose a comprehensive system for fully automated COVID-19 detection and lesion segmentation from CT scans, powered by deep learning strategies, to support the decision-making process for the diagnosis of COVID-19.
Notes: MILAB; no menciona
Approved: no
Call Number: Admin @ si @ PDR2021
Serial: 3635
 

 
Author: Manisha Das; Deep Gupta; Petia Radeva; Ashwini M. Bakde
Title: Optimized CT-MR neurological image fusion framework using biologically inspired spiking neural model in hybrid ℓ1 - ℓ0 layer decomposition domain
Type: Journal Article
Year: 2021
Publication: Biomedical Signal Processing and Control
Abbreviated Journal: BSPC
Volume: 68
Pages: 102535
Abstract: Medical image fusion plays an important role in the clinical diagnosis of several critical neurological diseases by merging complementary information available in multimodal images. In this paper, a novel CT-MR neurological image fusion framework is proposed using an optimized biologically inspired feedforward neural model in a two-scale hybrid ℓ1 − ℓ0 decomposition domain, with gray wolf optimization, to preserve both the structural and the texture information present in the source CT and MR images. Initially, the source images are subjected to a two-scale ℓ1 − ℓ0 decomposition with optimized parameters, giving a scale-1 detail layer, a scale-2 detail layer and a scale-2 base layer. The two detail layers at scales 1 and 2 are fused using an optimized biologically inspired neural model and a weighted-average scheme based on local energy and modified spatial frequency, to maximize the preservation of edges and local textures respectively, while the scale-2 base layer is fused using a choose-max rule to preserve the background information. To optimize the hyper-parameters of the hybrid ℓ1 − ℓ0 decomposition and the biologically inspired neural model, a fitness function is evaluated based on the spatial frequency and edge index of the resultant fused image obtained by adding all the fused components. The fusion performance is analyzed by conducting extensive experiments on different CT-MR neurological images. Experimental results indicate that the proposed method provides better-fused images and outperforms the other state-of-the-art fusion methods in both visual and quantitative assessments.
Notes: MILAB; no proj
Approved: no
Call Number: Admin @ si @ DGR2021b
Serial: 3636
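The fitness function above combines an edge index with spatial frequency, a standard sharpness measure for fused images. A minimal sketch of the spatial-frequency term (the edge-index term and the gray wolf optimization loop are omitted):

import numpy as np

def spatial_frequency(img: np.ndarray) -> float:
    # SF = sqrt(RF^2 + CF^2), where RF and CF are the RMS horizontal and
    # vertical first differences of the (grayscale) fused image.
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.hypot(rf, cf))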
 

 
Author: Alina Matei; Andreea Glavan; Petia Radeva; Estefania Talavera
Title: Towards Eating Habits Discovery in Egocentric Photo-Streams
Type: Journal Article
Year: 2021
Publication: IEEE Access
Abbreviated Journal: ACCESS
Volume: 9
Pages: 17495-17506
Abstract: Eating habits are learned throughout the early stages of our lives. However, it is not easy to be aware of how our food-related routine affects our healthy living. In this work, we address the unsupervised discovery of nutritional habits from egocentric photo-streams. We build a food-related behavioral pattern discovery model, which discloses nutritional routines from the activities performed throughout the days. To do so, we rely on Dynamic Time Warping for the evaluation of similarity among the collected days. Within this framework, we present a simple but robust and fast novel classification pipeline that outperforms the state of the art on food-related image classification, with a weighted accuracy and F-score of 70% and 63%, respectively. Later, with the Isolation Forest method, we identify days whose nutritional activities do not describe the person's habits as anomalies in the daily life of the user. Furthermore, we show an application for the identification of food-related scenes when the camera wearer eats in isolation. Results have shown the good performance of the proposed model and its relevance for visualizing the nutritional habits of individuals.
Notes: MILAB; no proj
Approved: no
Call Number: Admin @ si @ MGR2021
Serial: 3637
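The Isolation Forest step described above can be sketched in a few lines with scikit-learn: days are encoded as fixed-length vectors, and the forest flags the ones that do not fit the person's usual routine. The random day encodings below are placeholders for the real descriptors.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
day_vectors = rng.normal(size=(104, 16))   # one feature row per recorded day
day_vectors[:3] += 4.0                     # a few synthetic atypical days

# contamination is the assumed fraction of anomalous days, a tunable choice.
forest = IsolationForest(contamination=0.05, random_state=0).fit(day_vectors)
labels = forest.predict(day_vectors)       # -1 marks an anomalous day
print("anomalous days:", np.where(labels == -1)[0])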