Author Bhaskar Chakraborty; Michael Holte; Thomas B. Moeslund; Jordi Gonzalez; Xavier Roca
  Title A Selective Spatio-Temporal Interest Point Detector for Human Action Recognition in Complex Scenes Type Conference Article
  Year 2011 Publication 13th IEEE International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 1776-1783  
  Keywords  
  Abstract Recent progress in the field of human action recognition points towards the use of Spatio-Temporal Interest Points (STIPs) for local descriptor-based recognition strategies. In this paper we present a new approach for STIP detection by applying surround suppression combined with local and temporal constraints. Our method is significantly different from existing STIP detectors and improves the performance by detecting more repeatable, stable and distinctive STIPs for human actors, while suppressing unwanted background STIPs. For action representation we use a bag-of-visual-words (BoV) model of local N-jet features to build a vocabulary of visual words. To this end, we introduce a novel vocabulary building strategy by combining spatial pyramid and vocabulary compression techniques, resulting in improved performance and efficiency. Action class specific Support Vector Machine (SVM) classifiers are trained for categorization of human actions. A comprehensive set of experiments on existing benchmark datasets, and more challenging datasets of complex scenes, validate our approach and show state-of-the-art performance.  
  Address Barcelona  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1550-5499 ISBN 978-1-4577-1101-5 Medium  
  Area Expedition Conference ICCV  
  Notes ISE Approved no  
  Call Number Admin @ si @ CHM2011 Serial 1811  
Permanent link to this record
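For the record above, the following is a minimal sketch of the recognition stage the abstract describes: a bag-of-visual-words encoding of local descriptors followed by per-class SVM classification. It is not the authors' code; the STIP/N-jet feature extraction is replaced with random stand-ins, and vocabulary size and descriptor dimensionality are assumptions.

```python
# Illustrative BoV + SVM pipeline sketch; feature extraction is stubbed out.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_vocabulary(descriptor_sets, k=50, seed=0):
    """Cluster pooled local descriptors (e.g. N-jet features at STIPs) into k visual words."""
    pooled = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(pooled)

def bov_histogram(descriptors, vocabulary):
    """Encode one video as a normalized histogram of visual-word occurrences."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy usage with random stand-ins for per-video STIP descriptors (64-D assumed).
rng = np.random.default_rng(0)
train_desc = [rng.normal(size=(rng.integers(50, 120), 64)) for _ in range(20)]
train_labels = rng.integers(0, 4, size=20)          # 4 hypothetical action classes
vocab = build_vocabulary(train_desc)
X = np.array([bov_histogram(d, vocab) for d in train_desc])
clf = LinearSVC().fit(X, train_labels)              # one-vs-rest: one SVM per action class
print(clf.predict(X[:5]))
```

The paper's spatial-pyramid and vocabulary-compression steps would sit between the histogram encoding and the SVM; they are omitted here for brevity.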
 

 
Author R. Valenti; Theo Gevers
  Title Accurate Eye Center Location through Invariant Isocentric Patterns Type Journal Article
  Year 2012 Publication IEEE Transaction on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 34 Issue 9 Pages 1785-1798  
  Keywords  
  Abstract Impact factor 2010: 5.308
Impact factor 2011/12?: 5.96
Locating the center of the eyes allows for valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and impossible to use on standard (i.e., visible wavelength), low-resolution images of eyes. Systems based solely on appearance are proposed in the literature, but their accuracy does not allow us to accurately locate and distinguish eye center movements in these low-resolution settings. Our aim is to bridge this gap by locating the center of the eye within the area of the pupil on low-resolution images taken from a webcam or a similar device. The proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve in-plane rotational invariance, and to keep computational costs low. To further gain scale invariance, the approach is applied to a scale space pyramid. In this paper, we extensively test our approach for its robustness to changes in illumination, head pose, scale, occlusion, and eye rotation. We demonstrate that our system can achieve a significant improvement in accuracy over state-of-the-art techniques for eye center location in standard low-resolution imagery.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium  
  Area Expedition Conference  
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ VaG 2012a Serial 1849  
Permanent link to this record
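A simplified numerical sketch of the isophote idea behind the record above: each pixel votes for the center of its isophote's osculating circle, displaced by D = -(Lx, Ly)(Lx² + Ly²) / (Ly²Lxx - 2LxLyLxy + Lx²Lyy), and the accumulator maximum is taken as the eye center. The paper additionally uses curvedness weighting and a scale-space pyramid; smoothing scale and the vote-radius cutoff below are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def isophote_center_vote(gray, sigma=2.0, max_radius=20.0):
    L = gaussian_filter(gray.astype(float), sigma)
    Ly, Lx = np.gradient(L)                      # axis 0 = rows (y), axis 1 = cols (x)
    Lyy, _ = np.gradient(Ly)
    Lxy, Lxx = np.gradient(Lx)
    denom = Ly**2 * Lxx - 2.0 * Lx * Ly * Lxy + Lx**2 * Lyy
    denom[np.abs(denom) < 1e-9] = 1e-9
    dx = -Lx * (Lx**2 + Ly**2) / denom           # displacement toward the isocenter
    dy = -Ly * (Lx**2 + Ly**2) / denom
    weight = np.hypot(Lx, Ly)                    # weight votes by gradient magnitude
    weight[np.hypot(dx, dy) > max_radius] = 0.0  # drop implausibly distant votes
    h, w = L.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cx = np.clip((xx + dx).round().astype(int), 0, w - 1)
    cy = np.clip((yy + dy).round().astype(int), 0, h - 1)
    acc = np.zeros_like(L)
    np.add.at(acc, (cy, cx), weight)             # accumulate votes
    acc = gaussian_filter(acc, sigma)            # smooth before peak picking
    return np.unravel_index(acc.argmax(), acc.shape)

# Toy usage: a synthetic dark disc ("pupil") on a bright background.
yy, xx = np.mgrid[0:80, 0:80]
img = 255 - 200 * (((xx - 40) ** 2 + (yy - 32) ** 2) < 100)
print(isophote_center_vote(img))   # expected near (row=32, col=40)
```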
 

 
Author Matej Kristan; Jiri Matas; Martin Danelljan; Michael Felsberg; Hyung Jin Chang; Luka Cehovin Zajc; Alan Lukezic; Ondrej Drbohlav; Zhongqun Zhang; Khanh-Tung Tran; Xuan-Son Vu; Johanna Bjorklund; Christoph Mayer; Yushan Zhang; Lei Ke; Jie Zhao; Gustavo Fernandez; Noor Al-Shakarji; Dong An; Michael Arens; Stefan Becker; Goutam Bhat; Sebastian Bullinger; Antoni B. Chan; Shijie Chang; Hanyuan Chen; Xin Chen; Yan Chen; Zhenyu Chen; Yangming Cheng; Yutao Cui; Chunyuan Deng; Jiahua Dong; Matteo Dunnhofer; Wei Feng; Jianlong Fu; Jie Gao; Ruize Han; Zeqi Hao; Jun-Yan He; Keji He; Zhenyu He; Xiantao Hu; Kaer Huang; Yuqing Huang; Yi Jiang; Ben Kang; Jin-Peng Lan; Hyungjun Lee; Chenyang Li; Jiahao Li; Ning Li; Wangkai Li; Xiaodi Li; Xin Li; Pengyu Liu; Yue Liu; Huchuan Lu; Bin Luo; Ping Luo; Yinchao Ma; Deshui Miao; Christian Micheloni; Kannappan Palaniappan; Hancheol Park; Matthieu Paul; HouWen Peng; Zekun Qian; Gani Rahmon; Norbert Scherer-Negenborn; Pengcheng Shao; Wooksu Shin; Elham Soltani Kazemi; Tianhui Song; Rainer Stiefelhagen; Rui Sun; Chuanming Tang; Zhangyong Tang; Imad Eddine Toubal; Jack Valmadre; Joost van de Weijer; Luc Van Gool; Jash Vira; Stephane Vujasinovic; Cheng Wan; Jia Wan; Dong Wang; Fei Wang; Feifan Wang; He Wang; Limin Wang; Song Wang; Yaowei Wang; Zhepeng Wang; Gangshan Wu; Jiannan Wu; Qiangqiang Wu; Xiaojun Wu; Anqi Xiao; Jinxia Xie; Chenlong Xu; Min Xu; Tianyang Xu; Yuanyou Xu; Bin Yan; Dawei Yang; Ming-Hsuan Yang; Tianyu Yang; Yi Yang; Zongxin Yang; Xuanwu Yin; Fisher Yu; Hongyuan Yu; Qianjin Yu; Weichen Yu; YongSheng Yuan; Zehuan Yuan; Jianlin Zhang; Lu Zhang; Tianzhu Zhang; Guodongfang Zhao; Shaochuan Zhao; Yaozong Zheng; Bineng Zhong; Jiawen Zhu; Xuefeng Zhu; Yueting Zhuang; ChengAo Zong; Kunlong Zuo
  Title The First Visual Object Tracking Segmentation VOTS2023 Challenge Results Type Conference Article
  Year 2023 Publication Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops Abbreviated Journal  
  Volume Issue Pages 1796-1818  
  Keywords  
  Abstract The Visual Object Tracking Segmentation VOTS2023 challenge is the eleventh annual tracker benchmarking activity of the VOT initiative. This challenge is the first to merge short-term and long-term as well as single-target and multiple-target tracking, with segmentation masks as the only target location specification. A new dataset was created; the ground truth has been withheld to prevent overfitting. New performance measures and evaluation protocols have been created, along with a new toolkit and an evaluation server. Results of the 47 presented trackers indicate that modern tracking frameworks are well-suited to deal with the convergence of short-term and long-term tracking and that multiple- and single-target tracking can be considered a single problem. A leaderboard with participating trackers' details, the source code, the datasets, and the evaluation kit are publicly available at the challenge website: https://www.votchallenge.net/vots2023/.  
  Address Paris; France; October 2023  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCVW  
  Notes LAMP Approved no  
  Call Number Admin @ si @ KMD2023 Serial 3939  
Permanent link to this record
 

 
Author Santiago Segui; Michal Drozdzal; Ekaterina Zaytseva; Fernando Azpiroz; Petia Radeva; Jordi Vitria
  Title Detection of wrinkle frames in endoluminal videos using betweenness centrality measures for images Type Journal Article
  Year 2014 Publication IEEE Transactions on Information Technology in Biomedicine Abbreviated Journal TITB  
  Volume 18 Issue 6 Pages 1831-1838  
  Keywords Wireless Capsule Endoscopy; Small Bowel Motility Dysfunction; Contraction Detection; Structured Prediction; Betweenness Centrality  
  Abstract Intestinal contractions are one of the most important events to diagnose motility pathologies of the small intestine. When visualized by wireless capsule endoscopy (WCE), the sequence of frames that represents a contraction is characterized by a clear wrinkle structure in the central frames that corresponds to the folding of the intestinal wall. In this paper we present a new method to robustly detect wrinkle frames in full WCE videos by using a new mid-level image descriptor that is based on a centrality measure proposed for graphs. We present an extended validation, carried out on a very large database, that shows that the proposed method achieves state-of-the-art performance for this task.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes OR; MILAB; 600.046;MV Approved no  
  Call Number Admin @ si @ SDZ2014 Serial 2385  
Permanent link to this record
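A hedged sketch of the graph-centrality idea from the record above (not the authors' exact descriptor; the graph construction and histogram size are assumptions): pixels of a downscaled frame become graph nodes, 4-neighbours are connected with weights that grow with intensity difference, and the distribution of node betweenness centrality serves as a mid-level descriptor, since wrinkle-like line patterns concentrate shortest paths on few nodes.

```python
import numpy as np
import networkx as nx

def betweenness_descriptor(gray, bins=16):
    g = nx.Graph()
    h, w = gray.shape
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):                 # 4-connectivity
                ny, nx_ = y + dy, x + dx
                if ny < h and nx_ < w:
                    cost = 1.0 + abs(float(gray[y, x]) - float(gray[ny, nx_]))
                    g.add_edge((y, x), (ny, nx_), weight=cost)
    bc = nx.betweenness_centrality(g, weight="weight")      # exact; keep frames tiny
    values = np.fromiter(bc.values(), dtype=float)
    hist, _ = np.histogram(values, bins=bins, range=(0.0, values.max() + 1e-9))
    return hist / hist.sum()

# Toy usage on a 16x16 synthetic frame with a dark "fold" line.
frame = np.full((16, 16), 200.0)
frame[:, 8] = 40.0
print(betweenness_descriptor(frame))
```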
 

 
Author Lichao Zhang; Abel Gonzalez-Garcia; Joost Van de Weijer; Martin Danelljan; Fahad Shahbaz Khan
  Title Synthetic Data Generation for End-to-End Thermal Infrared Tracking Type Journal Article
  Year 2019 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP  
  Volume 28 Issue 4 Pages 1837-1850  
  Keywords  
  Abstract The usage of both off-the-shelf and end-to-end trained deep networks has significantly improved the performance of visual tracking on RGB videos. However, the lack of large labeled datasets hampers the usage of convolutional neural networks for tracking in thermal infrared (TIR) images. Therefore, most state-of-the-art methods for tracking on TIR data are still based on handcrafted features. To address this problem, we propose to use image-to-image translation models. These models allow us to translate the abundantly available labeled RGB data to synthetic TIR data. We explore both the usage of paired and unpaired image translation models for this purpose. These methods provide us with a large labeled dataset of synthetic TIR sequences, on which we can train end-to-end optimal features for tracking. To the best of our knowledge, we are the first to train end-to-end features for TIR tracking. We perform extensive experiments on the VOT-TIR2017 dataset. We show that a network trained on a large dataset of synthetic TIR data obtains better performance than one trained on the available real TIR data. Combining both data sources leads to further improvement. In addition, when we combine the network with motion features, we outperform the state of the art with a relative gain of over 10%, clearly showing the efficiency of using synthetic data to train end-to-end TIR trackers.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes LAMP; 600.141; 600.120 Approved no  
  Call Number Admin @ si @ YGW2019 Serial 3228  
Permanent link to this record
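A minimal stand-in sketch of the data flow in the record above (assumptions throughout: the tiny generator below is not the paper's model, which uses full pix2pix-style paired and CycleGAN-style unpaired translation networks): a generator maps 3-channel RGB frames to 1-channel synthetic TIR frames, and the RGB annotations carry over unchanged.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy RGB -> TIR-like translator; a placeholder for a trained translation model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),  # 1-channel output
        )

    def forward(self, rgb):
        return self.net(rgb)

g = TinyGenerator()
rgb_batch = torch.rand(4, 3, 128, 128)   # stand-in for labeled RGB tracking frames
tir_batch = g(rgb_batch)                 # synthetic TIR frames
print(tir_batch.shape)                   # torch.Size([4, 1, 128, 128])
# The bounding-box labels of rgb_batch apply verbatim to tir_batch, which is what
# makes end-to-end training of TIR tracking features possible.
```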
 

 
Author Debora Gil; Aura Hernandez-Sabate; Mireia Brunat; Steven Jansen; Jordi Martinez-Vilalta
  Title Structure-preserving smoothing of biomedical images Type Journal Article
  Year 2011 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 44 Issue 9 Pages 1842-1851  
  Keywords Non-linear smoothing; Differential geometry; Anatomical structures; segmentation; Cardiac magnetic resonance; Computerized tomography  
  Abstract Smoothing of biomedical images should preserve gray-level transitions between adjacent tissues, while restoring contours consistent with anatomical structures. Anisotropic diffusion operators are based on image appearance discontinuities (either local or contextual) and might fail at weak inter-tissue transitions. Meanwhile, the output of block-wise and morphological operations is prone to present a block structure due to the shape and size of the considered pixel neighborhood. In this contribution, we use differential geometry concepts to define a diffusion operator that restricts diffusion to image-consistent level sets. In this manner, the final state is a non-uniform intensity image presenting homogeneous inter-tissue transitions along anatomical structures, while smoothing intra-structure texture. Experiments on different types of medical images (magnetic resonance, computerized tomography) illustrate its benefit for further processing (such as segmentation) of the images.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0031-3203 ISBN Medium  
  Area Expedition Conference  
  Notes IAM; ADAS Approved no  
  Call Number IAM @ iam @ GHB2011 Serial 1526  
Permanent link to this record
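A simplified numerical sketch of the core idea in the record above (not the paper's full operator): diffuse only along level sets, u_t = u_ξξ, where u_ξξ = (ux² uyy - 2 ux uy uxy + uy² uxx) / (ux² + uy²) is the second derivative in the isophote (tangent) direction. This smooths texture inside structures while leaving inter-tissue transitions in place. Step size and iteration count are assumptions.

```python
import numpy as np

def smooth_along_level_sets(u, iters=100, dt=0.1, eps=1e-8):
    u = u.astype(float).copy()
    for _ in range(iters):
        uy, ux = np.gradient(u)
        uyy, _ = np.gradient(uy)
        uxy, uxx = np.gradient(ux)
        num = ux**2 * uyy - 2.0 * ux * uy * uxy + uy**2 * uxx
        u += dt * num / (ux**2 + uy**2 + eps)   # explicit Euler step of u_t = u_xixi
    return u

# Toy usage: noisy two-tissue image; the step edge survives, the noise does not.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[:, 32:] = 100.0
noisy = img + rng.normal(0, 10, img.shape)
print(np.abs(smooth_along_level_sets(noisy) - img).mean())
```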
 

 
Author Marco Pedersoli; Andrea Vedaldi; Jordi Gonzalez; Xavier Roca
  Title A coarse-to-fine approach for fast deformable object detection Type Journal Article
  Year 2015 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 48 Issue 5 Pages 1844-1853  
  Keywords  
  Abstract We present a method that can dramatically accelerate object detection with part-based models. The method is based on the observation that the cost of detection is likely to be dominated by the cost of matching each part to the image, and not by the cost of computing the optimal configuration of the parts as commonly assumed. Therefore accelerating detection requires minimizing the number of part-to-image comparisons. To this end we propose a multiple-resolution hierarchical part-based model and a corresponding coarse-to-fine inference procedure that recursively eliminates unpromising part placements from the search space. The method yields a ten-fold speedup over the standard dynamic programming approach and is complementary to the cascade-of-parts approach of [9]. Compared to the latter, our method does not have parameters to be determined empirically, which simplifies its use during the training of the model. Most importantly, the two techniques can be combined to obtain a very significant speedup, of two orders of magnitude in some cases. We evaluate our method extensively on the PASCAL VOC and INRIA datasets, demonstrating a very high increase in the detection speed with little degradation of the accuracy.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE; 600.078; 602.005; 605.001; 302.012 Approved no  
  Call Number Admin @ si @ PVG2015 Serial 2628  
Permanent link to this record
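A schematic sketch of coarse-to-fine pruning for the record above (a strong simplification of the paper's hierarchical part-based inference; the keep fraction and score maps are toy assumptions): score all placements with a cheap low-resolution model, keep only the most promising ones, and run the expensive fine-resolution model on the survivors.

```python
import numpy as np

def coarse_to_fine_detect(coarse_scores, fine_score_fn, keep_frac=0.05):
    """coarse_scores: HxW map from the cheap model; fine_score_fn(y, x) -> float."""
    flat = coarse_scores.ravel()
    k = max(1, int(keep_frac * flat.size))
    keep = np.argpartition(flat, -k)[-k:]          # survivors of the coarse pass
    best, best_pos = -np.inf, None
    for idx in keep:                               # expensive model on ~5% of placements
        y, x = np.unravel_index(idx, coarse_scores.shape)
        s = fine_score_fn(y, x)
        if s > best:
            best, best_pos = s, (y, x)
    return best_pos, best

# Toy usage: the fine score is an expensive refinement of the coarse one.
rng = np.random.default_rng(1)
coarse = rng.normal(size=(60, 80)); coarse[30, 40] += 5.0
pos, score = coarse_to_fine_detect(coarse, lambda y, x: coarse[y, x] + rng.normal(0, 0.1))
print(pos, round(score, 2))   # expected near (30, 40)
```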
 

 
Author Marcos V Conde; Florin Vasluianu; Javier Vazquez; Radu Timofte
  Title Perceptual image enhancement for smartphone real-time applications Type Conference Article
  Year 2023 Publication Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision Abbreviated Journal  
  Volume Issue Pages 1848-1858  
  Keywords  
  Abstract Recent advances in camera designs and imaging pipelines allow us to capture high-quality images using smartphones. However, due to the small size and lens limitations of the smartphone cameras, we commonly find artifacts or degradation in the processed images. The most common unpleasant effects are noise artifacts, diffraction artifacts, blur, and HDR overexposure. Deep learning methods for image restoration can successfully remove these artifacts. However, most approaches are not suitable for real-time applications on mobile devices due to their heavy computation and memory requirements. In this paper, we propose LPIENet, a lightweight network for perceptual image enhancement, with the focus on deploying it on smartphones. Our experiments show that, with far fewer parameters and operations, our model can deal with the mentioned artifacts and achieve competitive performance compared with state-of-the-art methods on standard benchmarks. Moreover, to prove the efficiency and reliability of our approach, we deployed the model directly on commercial smartphones and evaluated its performance. Our model can process 2K resolution images in under 1 second on mid-level commercial smartphones.  
  Address Waikoloa; Hawai; USA; January 2023  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference WACV  
  Notes MACO; CIC Approved no  
  Call Number Admin @ si @ CVV2023 Serial 3900  
Permanent link to this record
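A hypothetical miniature in the spirit of the record above (this is not LPIENet; channel counts and depth are arbitrary assumptions): a few convolutions with a global residual connection, so the network only has to predict a correction to the input, which keeps the parameter count small for on-device deployment.

```python
import torch
import torch.nn as nn

class TinyEnhancer(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        # Global residual: the network predicts a correction to the degraded input.
        return torch.clamp(x + self.body(x), 0.0, 1.0)

model = TinyEnhancer()
n_params = sum(p.numel() for p in model.parameters())
out = model(torch.rand(1, 3, 256, 256))
print(out.shape, n_params)   # a small parameter count is the point for real-time mobile use
```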
 

 
Author Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
  Title Video Alignment for Change Detection Type Journal Article
  Year 2011 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP  
  Volume 20 Issue 7 Pages 1858-1869  
  Keywords video alignment  
  Abstract In this work, we address the problem of aligning two video sequences. Such alignment refers to synchronization, i.e., the establishment of temporal correspondence between frames of the first and second video, followed by spatial registration of all the temporally corresponding frames. Video synchronization and alignment have been attempted before, but most often in the relatively simple cases of fixed or rigidly attached cameras and simultaneous acquisition. In addition, restrictive assumptions have been applied, including linear time correspondence or the knowledge of the complete trajectories of corresponding scene points; to some extent, these assumptions limit the practical applicability of any solutions developed. We intend to solve the more general problem of aligning video sequences recorded by independently moving cameras that follow similar trajectories, based only on the fusion of image intensity and GPS information. The novelty of our approach is to pose the synchronization as a MAP inference problem on a Bayesian network including the observations from these two sensor types, which have been proved complementary. Alignment results are presented in the context of videos recorded from vehicles driving along the same track at different times, for different road types. In addition, we explore two applications of the proposed video alignment method, both based on change detection between aligned videos. One is the detection of vehicles, which could be of use in ADAS. The other is online difference spotting in videos of surveillance rounds.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; IF Approved no  
  Call Number DPS 2011; ADAS @ adas @ dps2011 Serial 1705  
Permanent link to this record
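A sketch of the synchronization step for the record above, deliberately recast as plain dynamic time warping (the paper instead performs MAP inference in a Bayesian network): combine an appearance cost and a GPS-distance cost per frame pair, then find the monotonic frame correspondence with minimum total cost. The feature dimensionality and GPS weighting are assumptions.

```python
import numpy as np

def align_sequences(feat_a, feat_b, gps_a, gps_b, w_gps=0.5):
    na, nb = len(feat_a), len(feat_b)
    # Per-pair cost: appearance distance plus weighted GPS distance.
    cost = (np.linalg.norm(feat_a[:, None] - feat_b[None, :], axis=2)
            + w_gps * np.linalg.norm(gps_a[:, None] - gps_b[None, :], axis=2))
    D = np.full((na + 1, nb + 1), np.inf); D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal monotonic path = the temporal correspondence.
    path, (i, j) = [], (na, nb)
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min(((i - 1, j), (i, j - 1), (i - 1, j - 1)), key=lambda t: D[t])
    return path[::-1]

# Toy usage: the second ride is the first one driven at half speed.
rng = np.random.default_rng(0)
f_a = rng.normal(size=(20, 8)); f_b = np.repeat(f_a, 2, axis=0)
g_a = np.cumsum(rng.random((20, 2)), axis=0); g_b = np.repeat(g_a, 2, axis=0)
print(align_sequences(f_a, f_b, g_a, g_b)[:5])
```

Once frames are synchronized, spatial registration and frame differencing between the aligned videos yield the change detection the abstract describes.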
 

 
Author Xialei Liu; Joost Van de Weijer; Andrew Bagdanov
  Title Exploiting Unlabeled Data in CNNs by Self-Supervised Learning to Rank Type Journal Article
  Year 2019 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 41 Issue 8 Pages 1862-1878  
  Keywords Task analysis; Training; Image quality; Visualization; Uncertainty; Labeling; Neural networks; Learning from rankings; image quality assessment; crowd counting; active learning  
  Abstract For many applications the collection of labeled data is expensive and laborious. Exploitation of unlabeled data during training is thus a long-pursued objective of machine learning. Self-supervised learning addresses this by positing an auxiliary task (different, but related to the supervised task) for which data is abundantly available. In this paper, we show how ranking can be used as a proxy task for some regression problems. As another contribution, we propose an efficient backpropagation technique for Siamese networks which prevents the redundant computation introduced by the multi-branch network architecture. We apply our framework to two regression problems: Image Quality Assessment (IQA) and Crowd Counting. For both we show how to automatically generate ranked image sets from unlabeled data. Our results show that networks trained to regress to the ground truth targets for labeled data and to simultaneously learn to rank unlabeled data obtain significantly better, state-of-the-art results for both IQA and crowd counting. In addition, we show that measuring network uncertainty on the self-supervised proxy task is a good measure of informativeness of unlabeled data. This can be used to drive an algorithm for active learning and we show that this reduces labeling effort by up to 50 percent.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes LAMP; 600.109; 600.106; 600.120 Approved no  
  Call Number LWB2019 Serial 3267  
Permanent link to this record
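A hedged sketch of the ranking proxy task from the record above: generate a ranked pair from an unlabeled image (more blur implies lower quality) and train with a margin ranking loss, so the network learns quality-sensitive features without labels. The backbone, margin, and degradation are assumptions, and the paper's efficient single-branch Siamese backpropagation is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(            # stand-in scoring network f(x) -> scalar
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 1),
)
opt = torch.optim.Adam(backbone.parameters(), lr=1e-3)
rank_loss = nn.MarginRankingLoss(margin=0.5)

for step in range(3):                                     # toy loop on random "images"
    x = torch.rand(16, 3, 32, 32)                         # unlabeled batch
    x_degraded = F.avg_pool2d(x, 3, stride=1, padding=1)  # blur = synthetic lower rank
    s_hi = backbone(x).squeeze(1)
    s_lo = backbone(x_degraded).squeeze(1)
    loss = rank_loss(s_hi, s_lo, torch.ones_like(s_hi))   # enforce f(x) > f(degraded x)
    opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))
```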
 

 
Author Dena Bazazian; Dimosthenis Karatzas; Andrew Bagdanov
  Title Word Spotting in Scene Images based on Character Recognition Type Conference Article
  Year 2018 Publication IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops Abbreviated Journal  
  Volume Issue Pages 1872-1874  
  Keywords  
  Abstract In this paper we address the problem of unconstrained Word Spotting in scene images. We train a Fully Convolutional Network to produce heatmaps of all the character classes. Then, we employ the Text Proposals approach and, via a rectangle classifier, detect the most likely rectangle for each query word based on the character attribute maps. We evaluate the proposed method on ICDAR2015 and show that it is capable of identifying and recognizing query words in natural scene images.  
  Address Salt Lake City; USA; June 2018  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes DAG; 600.129; 600.121 Approved no  
  Call Number BKB2018a Serial 3179  
Permanent link to this record
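An illustrative sketch of the scoring step for the record above (assumed shapes and a simplified rule, not the paper's rectangle classifier): given per-character heatmaps from an FCN, score a candidate rectangle for a query word by splitting the box into as many vertical slices as the word has letters and reading each letter's heatmap in its slice.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def word_score(heatmaps, box, word):
    """heatmaps: (26, H, W) character maps; box: (x0, y0, x1, y1); word: query string."""
    x0, y0, x1, y1 = box
    edges = np.linspace(x0, x1, len(word) + 1).astype(int)  # one vertical slice per letter
    score = 0.0
    for ch, xa, xb in zip(word.lower(), edges[:-1], edges[1:]):
        hm = heatmaps[ALPHABET.index(ch), y0:y1, xa:max(xb, xa + 1)]
        score += hm.max() if hm.size else 0.0
    return score / len(word)

# Toy usage: plant the letters of "cat" left-to-right inside a text proposal.
H = np.zeros((26, 40, 90))
for i, ch in enumerate("cat"):
    H[ALPHABET.index(ch), 10:20, 10 + 20 * i: 25 + 20 * i] = 1.0
box = (10, 10, 70, 20)
print(word_score(H, box, "cat"), word_score(H, box, "act"))  # correct query scores higher
```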
 

 
Author J. Pladellorens; Joan Serrat; A. Castell; M.J. Yzuel
  Title Using mathematical morphology to determine left ventricular contours. Type Journal
  Year 1993 Publication Physics in Medicine and Biology. Abbreviated Journal  
  Volume 37 Issue Pages 1877-1894  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ PSC1993 Serial 146  
Permanent link to this record
 

 
Author Koen E.A. van de Sande; Jasper Uijlings; Theo Gevers; Arnold Smeulders
  Title Segmentation as Selective Search for Object Recognition Type Conference Article
  Year 2011 Publication 13th IEEE International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 1879-1886  
  Keywords  
  Abstract For object recognition, the current state-of-the-art is based on exhaustive search. However, to enable the use of more expensive features and classifiers and thereby progress beyond the state-of-the-art, a selective search strategy is needed. Therefore, we adapt segmentation as a selective search by reconsidering segmentation: We propose to generate many approximate locations over few and precise object delineations because (1) an object whose location is never generated cannot be recognised and (2) appearance and immediate nearby context are most effective for object recognition. Our method is class-independent and is shown to cover 96.7% of all objects in the Pascal VOC 2007 test set using only 1,536 locations per image. Our selective search enables the use of the more expensive bag-of-words method which we use to substantially improve the state-of-the-art by up to 8.5% for 8 out of 20 classes on the Pascal VOC 2010 detection challenge.  
  Address Barcelona  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1550-5499 ISBN 978-1-4577-1101-5 Medium  
  Area Expedition Conference ICCV  
  Notes ISE Approved no  
  Call Number Admin @ si @ SUG2011 Serial 1780  
Permanent link to this record
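A rough sketch of the "many approximate locations from segmentation" idea in the record above: run a fast graph-based segmentation at several scales and emit each region's bounding box as an object-location hypothesis. This is a simplification; the actual method additionally merges regions hierarchically by colour, texture, and size similarity, and the scale values below are assumptions.

```python
import numpy as np
from skimage.segmentation import felzenszwalb
from skimage.data import astronaut

def segmentation_proposals(image, scales=(50, 150, 300)):
    boxes = set()
    for scale in scales:                     # multiple scales -> regions of diverse sizes
        labels = felzenszwalb(image, scale=scale, sigma=0.8, min_size=20)
        for lab in np.unique(labels):
            ys, xs = np.nonzero(labels == lab)
            boxes.add((int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())))
    return sorted(boxes)

props = segmentation_proposals(astronaut())
print(len(props), props[:3])    # a few hundred class-independent location hypotheses
```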
 

 
Author Ana Garcia Rodriguez; Jorge Bernal; F. Javier Sanchez; Henry Cordova; Rodrigo Garces Duran; Cristina Rodriguez de Miguel; Gloria Fernandez Esparrach
  Title Polyp fingerprint: automatic recognition of colorectal polyps’ unique features Type Journal Article
  Year 2020 Publication Surgical Endoscopy and other Interventional Techniques Abbreviated Journal SEND  
  Volume 34 Issue 4 Pages 1887-1889  
  Keywords  
  Abstract BACKGROUND:
Content-based image retrieval (CBIR) is an application of machine learning used to retrieve images by similarity on the basis of features. Our objective was to develop a CBIR system that could identify images containing the same polyp ('polyp fingerprint').

METHODS:
A machine learning technique called Bag of Words was used to describe each endoscopic image containing a polyp in a unique way. The system was tested with 243 white light images belonging to 99 different polyps (for each polyp there were at least two images representing it in two different temporal moments). Images were acquired in routine colonoscopies at Hospital Clínic using high-definition Olympus endoscopes. The method provided for each image the closest match within the dataset.

RESULTS:
The system matched another image of the same polyp in 221/243 cases (91%). No differences were observed in the number of correct matches according to Paris classification (protruded: 90.7% vs. non-protruded: 91.3%) and size (< 10 mm: 91.6% vs. > 10 mm: 90%).

CONCLUSIONS:
A CBIR system can match accurately two images containing the same polyp, which could be a helpful aid for polyp image recognition.

KEYWORDS:
Artificial intelligence; Colorectal polyps; Content-based image retrieval
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MV; no menciona Approved no  
  Call Number Admin @ si @ Serial 3403  
Permanent link to this record
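A hedged sketch of a Bag-of-Words CBIR matcher like the one described in the record above (the descriptor choice, vocabulary size, and distance are assumptions, not the study's configuration): encode every image as a visual-word histogram and retrieve the closest stored histogram as the "fingerprint" match.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

orb = cv2.ORB_create(nfeatures=300)

def orb_descriptors(gray):
    _, desc = orb.detectAndCompute(gray, None)
    return desc.astype(float) if desc is not None else np.zeros((1, 32))

def encode(gray, vocab):
    """Bag-of-Words histogram of an image's local descriptors."""
    words = vocab.predict(orb_descriptors(gray))
    h = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return h / max(h.sum(), 1.0)

# Toy usage with random textures standing in for polyp frames.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 255, (120, 120), dtype=np.uint8) for _ in range(6)]
vocab = KMeans(n_clusters=32, n_init=10, random_state=0).fit(
    np.vstack([orb_descriptors(f) for f in frames]))
db = np.array([encode(f, vocab) for f in frames])
query = encode(frames[2], vocab)
print(np.argmin(np.linalg.norm(db - query, axis=1)))   # closest match -> index 2
```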
 

 
Author Dawid Rymarczyk; Joost van de Weijer; Bartosz Zielinski; Bartlomiej Twardowski
  Title ICICLE: Interpretable Class Incremental Continual Learning Type Conference Article
  Year 2023 Publication 20th IEEE International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 1887-1898  
  Keywords  
  Abstract Continual learning enables incremental learning of new tasks without forgetting those previously learned, resulting in positive knowledge transfer that can enhance performance on both new and old tasks. However, continual learning poses new challenges for interpretability, as the rationale behind model predictions may change over time, leading to interpretability concept drift. We address this problem by proposing Interpretable Class-InCremental LEarning (ICICLE), an exemplar-free approach that adopts a prototypical part-based approach. It consists of three crucial novelties: interpretability regularization that distills previously learned concepts while preserving user-friendly positive reasoning; proximity-based prototype initialization strategy dedicated to the fine-grained setting; and task-recency bias compensation devoted to prototypical parts. Our experimental results demonstrate that ICICLE reduces the interpretability concept drift and outperforms the existing exemplar-free methods of common class-incremental learning when applied to concept-based models.  
  Address Paris; France; October 2023  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCV  
  Notes LAMP Approved no  
  Call Number Admin @ si @ RWZ2023 Serial 3947  
Permanent link to this record
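A speculative sketch of the interpretability-regularization ingredient in the record above (names, shapes, and the loss weight are assumptions; see the paper for the actual formulation): keep the new model's prototype-similarity maps close to the frozen previous model's maps, so explanations learned on old tasks survive training on a new task.

```python
import torch
import torch.nn.functional as F

def interpretability_distillation(sim_new, sim_old):
    """sim_*: (batch, n_prototypes, H, W) prototype-similarity maps."""
    return F.mse_loss(sim_new, sim_old.detach())   # distill old concepts' spatial responses

# Toy usage inside a task-t training step.
sim_old = torch.rand(8, 10, 7, 7)                  # maps from the frozen task-(t-1) model
sim_new = sim_old + 0.1 * torch.randn_like(sim_old)
sim_new.requires_grad_(True)
task_loss = sim_new.sum() * 0.0                    # placeholder for the usual task loss
loss = task_loss + 0.5 * interpretability_distillation(sim_new, sim_old)
loss.backward()
print(float(loss))
```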