Records
Author (up) David Rotger; Petia Radeva; Oriol Rodriguez
Title Vessel Tortuosity Extraction from IVUS Images Type Miscellaneous
Year 2006 Publication Computers in Cardiology (CiC'06), 33: 689–692 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Valencia (Spain)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number BCNPCL @ bcnpcl @ RRR2006 Serial 762
Permanent link to this record
 

 
Author (up) David Sanchez-Mendoza; David Masip; Agata Lapedriza
Title Emotion recognition from mid-level features Type Journal Article
Year 2015 Publication Pattern Recognition Letters Abbreviated Journal PRL
Volume 67 Issue Part 1 Pages 66–74
Keywords Facial expression; Emotion recognition; Action units; Computer vision
Abstract In this paper we present a study on the use of Action Units as mid-level features for automatically recognizing basic and subtle emotions. We propose a representation model based on mid-level facial muscular movement features. We encode these movements dynamically using the Facial Action Coding System, and propose to use these intermediate features based on Action Units (AUs) to classify emotions. AU activations are detected by fusing a set of spatiotemporal geometric and appearance features. The algorithm is validated in two applications: (i) the recognition of 7 basic emotions using the publicly available Cohn-Kanade database, and (ii) the inference of subtle emotional cues in the Newscast database. In this second scenario, we consider emotions that are perceived cumulatively over longer periods of time. In particular, we automatically classify whether video shots from public news TV channels refer to Good or Bad news. To deal with the different video lengths we propose a Histogram of Action Units and compute it using a sliding window strategy on the frame sequences. Our approach achieves accuracies close to human perception.
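As an illustration of the Histogram of Action Units described in this abstract, the following Python sketch computes one histogram per sliding-window position; the function name, the binary per-frame AU-activation input and the L1 normalization are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def histogram_of_action_units(au_activations, window_size, stride):
    """Sliding-window Histogram of Action Units (illustrative sketch).

    au_activations: (n_frames, n_aus) binary matrix of per-frame AU detections.
    Returns one L1-normalized AU histogram per window position.
    """
    n_frames, _ = au_activations.shape
    histograms = []
    for start in range(0, n_frames - window_size + 1, stride):
        hist = au_activations[start:start + window_size].sum(axis=0).astype(float)
        if hist.sum() > 0:
            hist /= hist.sum()  # compare windows of different overall activity
        histograms.append(hist)
    return np.stack(histograms)

# Example: 300 frames, 17 AUs, 2-second windows at 30 fps, 1-second stride
aus = (np.random.rand(300, 17) > 0.9).astype(int)
print(histogram_of_action_units(aus, window_size=60, stride=30).shape)  # (9, 17)
```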
Address
Corporate Author Thesis
Publisher Elsevier B.V. Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0167-8655 ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number Admin @ si @ SML2015 Serial 2746
Permanent link to this record
 

 
Author (up) David Vazquez
Title Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection Type Book Whole
Year 2013 Publication PhD Thesis, Universitat de Barcelona-CVC Abbreviated Journal
Volume 1 Issue 1 Pages 1-105
Keywords Pedestrian Detection; Domain Adaptation
Abstract Pedestrian detection is of paramount interest for many applications, e.g. Advanced Driver Assistance Systems, Intelligent Video Surveillance and Multimedia systems. The most promising pedestrian detectors rely on appearance-based classifiers trained with annotated data. However, the required annotation step is an intensive and subjective task for humans, which makes it worth minimizing their intervention in this process by using computational tools like realistic virtual worlds. The reason for using such tools is that they allow the automatic generation of precise and rich annotations of visual information. Nevertheless, the use of this kind of data raises the following question: can a pedestrian appearance model learnt with virtual-world data work successfully for pedestrian detection in real-world scenarios? To answer this question, we conduct different experiments that suggest a positive answer. However, pedestrian classifiers trained with virtual-world data can suffer from the so-called dataset shift problem, just as real-world based classifiers do. Accordingly, we have designed different domain adaptation techniques to face this problem, all of them integrated in the same framework (V-AYLA). We have explored different methods to train a domain-adapted pedestrian classifier by collecting a few pedestrian samples from the target domain (real world) and combining them with many samples of the source domain (virtual world). The extensive experiments we present show that pedestrian detectors developed within the V-AYLA framework do achieve domain adaptation. Ideally, we would like to adapt our system without any human intervention. Therefore, as a first proof of concept we also propose an unsupervised domain adaptation technique that avoids human intervention during the adaptation process. To the best of our knowledge, this thesis is the first work demonstrating adaptation of virtual and real worlds for developing an object detector. Last but not least, we also assessed a different strategy to avoid dataset shift that consists of collecting real-world samples and retraining with them in such a way that no bounding boxes of real-world pedestrians have to be provided. We show that the resulting classifier is competitive with respect to the counterpart trained with samples collected by manually annotating pedestrian bounding boxes. The results presented in this thesis not only conclude with a proposal for adapting a virtual-world pedestrian detector to the real world, but also point out a new methodology that would allow the system to adapt to different situations, which we hope will provide the foundations for future research in this unexplored area.
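A minimal sketch of the source/target combination idea behind V-AYLA, assuming binary labels and precomputed feature vectors (e.g. HOG-like descriptors); the up-weighting scheme, function name and default weight are illustrative placeholders, not the actual adaptation variants explored in the thesis.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_domain_adapted(X_virtual, y_virtual, X_real, y_real, real_weight=10.0):
    """Train on many virtual-world samples plus a few real-world samples.

    Up-weighting the scarce target-domain samples is one simple mixing
    strategy; the thesis explores several V-AYLA variants not shown here.
    """
    X = np.vstack([X_virtual, X_real])
    y = np.concatenate([y_virtual, y_real])
    weights = np.concatenate([np.ones(len(y_virtual)),
                              np.full(len(y_real), real_weight)])
    return LinearSVC().fit(X, y, sample_weight=weights)
```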
Address Barcelona
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Barcelona Editor Antonio Lopez; Daniel Ponsa
Language English Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-940530-1-6 Medium
Area Expedition Conference
Notes adas Approved yes
Call Number ADAS @ adas @ Vaz2013 Serial 2276
Permanent link to this record
 

 
Author (up) David Vazquez; Antonio Lopez
Title Intrusion Classification in Intelligent Video Surveillance Systems Type Report
Year 2008 Publication Estudis d'Enginyeria Superior en Informática Abbreviated Journal UAB
Volume Issue Pages
Keywords Human detection; Car detection; Intrusion detection
Abstract An intelligent video surveillance (IVS) system is a camera-based installation able to process in real time the images coming from the cameras. The aim is to automatically warn about different events of interest at the moment they happen. The Daview system by Davantis is a commercial example of an IVS system. The problems addressed by any IVS system, and so by Daview, are so challenging that no IVS system is perfect; thus, they need continuous improvement. Accordingly, this project studies different approaches to outperform the current Daview performance; in particular, we focus on improving its classification core. We present an in-depth study of the state of the art in IVS systems, as well as of how Daview works. Based on that knowledge, we propose four possibilities for improving Daview's classification capabilities: improving existing classifiers; improving the existing classifier combination; creating new classifiers; and creating new classifier-based architectures. Our main contribution has been the incorporation of state-of-the-art feature selection and machine learning techniques for the classification tasks, a viewpoint not fully addressed in the current Daview system. After a comprehensive quantitative evaluation we show how one of our proposals clearly outperforms the overall performance of the current Daview system. In particular, the classification core that we finally propose consists of an AdaBoost One-Against-All architecture that uses appearance and motion features that were already present in the current Daview system.
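The proposed classification core is described as an AdaBoost One-Against-All architecture. The Python sketch below shows that scheme in miniature, assuming precomputed appearance and motion feature vectors; scikit-learn's AdaBoostClassifier stands in for whatever boosting implementation Daview uses, and class names and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

class OneAgainstAllAdaBoost:
    """One binary AdaBoost detector per intrusion class (illustrative sketch)."""

    def __init__(self, classes, n_estimators=100):
        self.classes = list(classes)
        self.models = {c: AdaBoostClassifier(n_estimators=n_estimators)
                       for c in self.classes}

    def fit(self, X, y):
        y = np.asarray(y)
        for c, model in self.models.items():
            model.fit(X, (y == c).astype(int))  # class c versus the rest
        return self

    def predict(self, X):
        # choose the class whose one-against-all detector is most confident
        scores = np.column_stack(
            [self.models[c].decision_function(X) for c in self.classes])
        return np.asarray(self.classes)[scores.argmax(axis=1)]
```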
Address Bellaterra, Spain
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference PFC
Notes ADAS Approved no
Call Number ADAS @ adas @ VL2008a Serial 1670
Permanent link to this record
 

 
Author (up) David Vazquez; Antonio Lopez; Daniel Ponsa
Title Unsupervised Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection Type Conference Article
Year 2012 Publication 21st International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 3492 - 3495
Keywords Pedestrian Detection; Domain Adaptation; Virtual worlds
Abstract Vision-based object detectors are crucial for different applications. They rely on learnt object models. Ideally, we would like to deploy our vision system in the scenario where it must operate and lead it to self-learn how to distinguish the objects of interest, i.e., without human intervention. However, learning each object model requires labelled samples collected through a tiresome manual process. For instance, we are interested in exploring the self-training of a pedestrian detector for driver assistance systems. Our first approach to avoid manual labelling consisted in the use of samples coming from realistic computer graphics, so that their labels are automatically available [12]. This would make the desired self-training of our pedestrian detector possible. However, as we showed in [14], there may be a dataset shift between virtual and real worlds. In order to overcome it, we propose the use of unsupervised domain adaptation techniques that avoid human intervention during the adaptation process. In particular, this paper explores the use of the transductive SVM (T-SVM) learning algorithm in order to adapt virtual and real worlds for pedestrian detection (Fig. 1).
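The paper applies transductive SVM learning to adapt virtual-world training to real-world images. The sketch below is only a rough self-labelling approximation of that idea, not the exact T-SVM solver used in the paper: it iteratively pseudo-labels confident unlabelled target samples and retrains. Binary labels in {0, 1}, function names and the margin threshold are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def transductive_adapt(X_virtual, y_virtual, X_real, n_iters=5, min_margin=1.0):
    """Rough self-labelling approximation of transductive SVM adaptation.

    Train on labelled virtual-world samples, then iteratively pseudo-label
    the confident unlabelled real-world samples and retrain with them.
    """
    clf = LinearSVC().fit(X_virtual, y_virtual)
    for _ in range(n_iters):
        margins = clf.decision_function(X_real)
        keep = np.abs(margins) >= min_margin      # confident real samples only
        if not keep.any():
            break
        X = np.vstack([X_virtual, X_real[keep]])
        y = np.concatenate([y_virtual, (margins[keep] > 0).astype(int)])
        clf = LinearSVC().fit(X, y)
    return clf
```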
Address Tsukuba Science City, Japan
Corporate Author Thesis
Publisher IEEE Place of Publication Tsukuba Science City, Japan Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1051-4651 ISBN 978-1-4673-2216-4 Medium
Area Expedition Conference ICPR
Notes ADAS Approved no
Call Number ADAS @ adas @ VLP2012 Serial 1981
Permanent link to this record
 

 
Author (up) David Vazquez; Antonio Lopez; Daniel Ponsa; David Geronimo
Title Interactive Training of Human Detectors Type Book Chapter
Year 2013 Publication Multimodal Interaction in Image and Video Applications Abbreviated Journal
Volume 48 Issue Pages 169-182
Keywords Pedestrian Detection; Virtual World; AdaBoost; Domain Adaptation
Abstract Image-based human detection remains a challenging problem. The most promising detectors rely on classifiers trained with labelled samples. However, labelling is a manual, labour-intensive step. To overcome this problem we propose to collect images of pedestrians from a virtual city, i.e., with automatic labels, and train a pedestrian detector with them, which works well when such virtual-world data are similar to the testing data, i.e., real-world pedestrians in urban areas. When the testing data are acquired under different conditions than the training data, e.g., human detection in personal photo albums, dataset shift appears. In previous work, we cast this problem as one of domain adaptation and solved it with an active learning procedure. In this work, we focus on the same problem but evaluate a different set of faster-to-compute features, i.e., Haar, EOH and their combination. In particular, we train a classifier with virtual-world data, using such features and Real AdaBoost as the learning machine. This classifier is applied to real-world training images. Then, a human oracle interactively corrects the wrong detections, i.e., a few missed detections are manually annotated and some false ones are pointed out too. A low amount of manual annotation is fixed as a restriction. Difficult real- and virtual-world samples are combined within what we call the cool world, and we retrain the classifier with these data. Our experiments show that this adapted classifier is equivalent to the one trained with only real-world data but requires 90% fewer manual annotations.
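The interactive adaptation loop described in this abstract can be summarized structurally as follows. All three callables (detect, review, retrain) are hypothetical placeholders standing in for the chapter's Real AdaBoost detector, the human oracle and the cool-world retraining step.

```python
def cool_world_retraining(detect, review, retrain, virtual_samples, real_images, budget):
    """Structural sketch of the interactive adaptation loop.

    detect(image)    -> candidate detections from the virtual-world classifier
    review(cand)     -> human-corrected label (1 = pedestrian, 0 = false positive)
    retrain(samples) -> classifier retrained on the combined "cool world" set
    All three callables are hypothetical placeholders.
    """
    corrections = []
    for image in real_images:
        for candidate in detect(image):
            corrections.append((candidate, review(candidate)))  # one manual annotation
            budget -= 1
            if budget <= 0:  # a low manual-annotation budget is a hard restriction
                return retrain(virtual_samples + corrections)
    return retrain(virtual_samples + corrections)
```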
Address Springer Heidelberg New York Dordrecht London
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language English Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1868-4394 ISBN 978-3-642-35931-6 Medium
Area Expedition Conference
Notes ADAS; 600.057; 600.054; 605.203 Approved no
Call Number VLP2013; ADAS @ adas @ vlp2013 Serial 2193
Permanent link to this record
 

 
Author (up) David Vazquez; Antonio Lopez; Daniel Ponsa; Javier Marin
Title Virtual Worlds and Active Learning for Human Detection Type Conference Article
Year 2011 Publication 13th International Conference on Multimodal Interaction Abbreviated Journal
Volume Issue Pages 393-400
Keywords Pedestrian Detection; Human detection; Virtual; Domain Adaptation; Active Learning
Abstract Image-based human detection is of paramount interest due to its potential applications in fields such as advanced driving assistance, surveillance and media analysis. However, even detecting non-occluded standing humans remains a challenge of intensive research. The most promising human detectors rely on classifiers developed in the discriminative paradigm, i.e., trained with labelled samples. However, labelling is a manually intensive step, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, some authors have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of rendered images, i.e., using realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, they are not always as good as the classical approach of training and testing with data coming from the same camera, or similar ones. Accordingly, in this paper we address the challenge of using a virtual world for gathering (while playing a videogame) a large amount of automatically labelled samples (virtual humans and background) and then training a classifier that performs as well, in real-world images, as one trained on manually labelled real-world samples. To do so, we cast the problem as one of domain adaptation. In doing so, we assume that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we propose a non-standard active learning technique. Therefore, ultimately our human model is learnt by the combination of virtual- and real-world labelled samples (Fig. 1), which has not been done before. We present quantitative results showing that this approach is valid.
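The paper's active learning technique is described as non-standard and is not reproduced here; the sketch below only illustrates the generic idea of selecting the least-confident real-world samples for human labelling, assuming a classifier exposing a decision_function margin (e.g. a linear SVM).

```python
import numpy as np

def select_for_labelling(clf, X_unlabelled, n_queries):
    """Margin-based uncertainty sampling: pick the n least-confident samples."""
    margins = np.abs(clf.decision_function(X_unlabelled))
    return np.argsort(margins)[:n_queries]  # smallest margin = most uncertain
```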
Address Alicante, Spain
Corporate Author Thesis
Publisher ACM DL Place of Publication New York, NY, USA Editor
Language English Summary Language English Original Title Virtual Worlds and Active Learning for Human Detection
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4503-0641-6 Medium
Area Expedition Conference ICMI
Notes ADAS Approved yes
Call Number ADAS @ adas @ VLP2011a Serial 1683
Permanent link to this record
 

 
Author (up) David Vazquez; Antonio Lopez; Daniel Ponsa; Javier Marin
Title Cool world: domain adaptation of virtual and real worlds for human detection using active learning Type Conference Article
Year 2011 Publication NIPS Domain Adaptation Workshop: Theory and Application Abbreviated Journal NIPS-DA
Volume Issue Pages
Keywords Pedestrian Detection; Virtual; Domain Adaptation; Active Learning
Abstract Image-based human detection is of paramount interest for different applications. The most promising human detectors rely on discriminatively learnt classifiers, i.e., trained with labelled samples. However, labelling is a manually intensive task, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, in Marin et al. we proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, they are not always as good as the classical approach of training and testing with data coming from the same camera and the same type of scenario. Accordingly, in Vazquez et al. we cast the problem as one of supervised domain adaptation. In doing so, we assume that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we use an active learning technique. Thus, ultimately our human model is learnt from the combination of virtual- and real-world labelled samples which, to the best of our knowledge, had not been done before. Here, we term such a combined space the cool world. In this extended abstract we summarize our proposal, and include quantitative results from Vazquez et al. showing its validity.
Address Granada, Spain
Corporate Author Thesis
Publisher Place of Publication Granada, Spain Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DA-NIPS
Notes ADAS Approved no
Call Number ADAS @ adas @ VLP2011b Serial 1756
Permanent link to this record
 

 
Author (up) David Vazquez; David Geronimo; Antonio Lopez
Title The effect of the distance in pedestrian detection Type Report
Year 2009 Publication CVC Technical Report Abbreviated Journal
Volume 149 Issue Pages
Keywords Pedestrian Detection
Abstract Pedestrian accidents are one of the leading preventable causes of death. In order to reduce the number of accidents, in the last decade pedestrian protection systems have been introduced: a special type of advanced driver assistance system in which an on-board camera explores the road ahead for possible collisions with pedestrians in order to warn the driver or perform braking actions. As a result of the variability in appearance, pose and size, pedestrian detection is a very challenging task, so many techniques, models and features have been proposed to solve the problem. As the appearance of pedestrians varies significantly as a function of distance, a system based on multiple classifiers specialized on different depths is likely to improve the overall performance with respect to a typical system based on a general detector. Accordingly, the main aim of this work is to explore the effect of distance in pedestrian detection. We have evaluated three pedestrian detectors (HOG, HAAR and EOH) on two different databases (INRIA and Daimler09) for two different sizes (small and big). Through an extensive set of experiments we answer questions such as which datasets and evaluation methods are the most adequate, which is the best method for each pedestrian size and why, and how the methods' optimum parameters vary with respect to distance.
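The multi-classifier idea studied here, detectors specialized for different depths, can be sketched as a simple routing rule on candidate window size, a rough proxy for distance. Both detectors, the window list and the height threshold below are hypothetical placeholders, not the report's evaluated configuration.

```python
def detect_by_distance(image, near_detector, far_detector, windows, height_threshold=80):
    """Route each candidate window to a size-specialized detector (sketch).

    Window height in pixels serves as a rough proxy for distance: distant
    pedestrians are imaged small, nearby ones large.
    """
    detections = []
    for (x, y, w, h) in windows:
        crop = image[y:y + h, x:x + w]               # image is a numpy array
        detector = near_detector if h >= height_threshold else far_detector
        if detector(crop):                           # detector: crop -> bool
            detections.append((x, y, w, h))
    return detections
```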
Address
Corporate Author Thesis Master's thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference M.Sc.
Notes ADAS Approved no
Call Number ADAS @ adas @ VGL2009 Serial 1669
Permanent link to this record
 

 
Author (up) David Vazquez; Enrique Cabello
Title Empleo de sistemas biométricos faciales aplicados al reconocimiento de personas en aeropuertos Type Report
Year 2007 Publication Ingeniería Técnica en Informática de Sistemas Abbreviated Journal URJC
Volume Issue Pages
Keywords Surveillance; Face detection; Face recognition
Abstract This project was carried out during 2005 and 2006, testing a prototype of a facial verification system with images extracted from the video-surveillance cameras of Barajas airport. Several experiments were designed, grouped into two classes. In the first type, the system is trained with images obtained under laboratory conditions and then tested with images extracted from the video-surveillance cameras of Barajas airport. In the second case, both the training and the test images are extracted from Barajas.
A complete system was developed, including image acquisition and digitization, localization and cropping of the faces in the scene, subject verification and reporting of results. The results show that, in general, an image-based facial verification system can be a valuable aid to an operator who must keep watch over large areas.
Address
Corporate Author Thesis Bachelor's thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes invisible;ADAS Approved no
Call Number ADAS @ adas @ VC2007a Serial 1671
Permanent link to this record
 

 
Author (up) David Vazquez; Javier Marin; Antonio Lopez; Daniel Ponsa; David Geronimo
Title Virtual and Real World Adaptation for Pedestrian Detection Type Journal Article
Year 2014 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 36 Issue 4 Pages 797-809
Keywords Domain Adaptation; Pedestrian Detection
Abstract Pedestrian detection is of paramount interest for many applications. The most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a human-intensive and subjective task worth minimizing. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? The experiments conducted show that virtual-world based training can provide excellent testing accuracy in the real world, but it can also suffer the dataset shift problem, as real-world based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain-adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy as training with many human-provided pedestrian annotations and testing with real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes ADAS; 600.057; 600.054; 600.076 Approved no
Call Number ADAS @ adas @ VML2014 Serial 2275
Permanent link to this record
 

 
Author (up) David Vazquez; Jiaolong Xu; Sebastian Ramos; Antonio Lopez; Daniel Ponsa
Title Weakly Supervised Automatic Annotation of Pedestrian Bounding Boxes Type Conference Article
Year 2013 Publication CVPR Workshop on Ground Truth – What is a good dataset? Abbreviated Journal
Volume Issue Pages 706 - 711
Keywords Pedestrian Detection; Domain Adaptation
Abstract Among the components of a pedestrian detector, its trained pedestrian classifier is crucial for achieving the desired performance. The initial task of the training process consists of collecting samples of pedestrians and background, which involves tiresome manual annotation of pedestrian bounding boxes (BBs). Thus, recent works have assessed the use of automatically collected samples from photo-realistic virtual worlds. However, learning from virtual-world samples and testing on real-world images may suffer the dataset shift problem. Accordingly, in this paper we assess a strategy to collect samples from the real world and retrain with them, thus avoiding the dataset shift, but in such a way that no BBs of real-world pedestrians have to be provided. In particular, we train a pedestrian classifier based on virtual-world samples (no human annotation required). Then, using such a classifier, we collect pedestrian samples from real-world images by detection. Afterwards, a human oracle efficiently rejects the false detections (weak annotation). Finally, a new classifier is trained with the accepted detections. We show that this classifier is competitive with respect to the counterpart trained with samples collected by manually annotating hundreds of pedestrian BBs.
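Structurally, the weak-annotation pipeline reads as below. The detect, accept and retrain callables are hypothetical placeholders for the virtual-world-trained detector, the human yes/no oracle and the final training step; the point is that no bounding box is ever drawn by hand.

```python
def weakly_supervised_annotation(detect, accept, retrain, real_images):
    """Sketch of the weak-annotation pipeline: no boxes are drawn by hand.

    detect(image)     -> bounding boxes proposed by the virtual-world classifier
    accept(crop)      -> quick human yes/no on a cropped detection (weak annotation)
    retrain(pos, neg) -> new classifier trained on the accepted real samples
    """
    positives, negatives = [], []
    for image in real_images:
        for (x, y, w, h) in detect(image):
            crop = image[y:y + h, x:x + w]
            (positives if accept(crop) else negatives).append(crop)
    return retrain(positives, negatives)
```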
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher IEEE Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes ADAS; 600.054; 600.057; 601.217 Approved no
Call Number ADAS @ adas @ VXR2013a Serial 2219
Permanent link to this record
 

 
Author (up) David Vazquez; Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Antonio Lopez; Adriana Romero; Michal Drozdzal; Aaron Courville
Title A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images Type Conference Article
Year 2017 Publication 31st International Congress and Exhibition on Computer Assisted Radiology and Surgery Abbreviated Journal
Volume Issue Pages
Keywords Deep Learning; Medical Imaging
Abstract Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss-rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy images, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. We provide new baselines on this dataset by training standard fully convolutional networks (FCN) for semantic segmentation, significantly outperforming, without any further post-processing, prior results in endoluminal scene segmentation.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CARS
Notes ADAS; MV; 600.075; 600.085; 600.076; 601.281; 600.118 Approved no
Call Number ADAS @ adas @ VBS2017a Serial 2880
Permanent link to this record
 

 
Author (up) David Vazquez; Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Antonio Lopez; Adriana Romero; Michal Drozdzal; Aaron Courville
Title A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images Type Journal Article
Year 2017 Publication Journal of Healthcare Engineering Abbreviated Journal JHCE
Volume Issue Pages 2040-2295
Keywords Colonoscopy images; Deep Learning; Semantic Segmentation
Abstract Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are polyp miss-rate and inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCN). We perform a comparative study to show that FCN significantly outperform, without any further post-processing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
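For readers unfamiliar with FCNs, the toy PyTorch model below shows the fully convolutional pattern the baselines rely on, per-pixel class scores recovered at full resolution; it is a deliberately minimal stand-in, not the standard FCN architecture actually benchmarked in the paper. The 4 output classes follow the dataset description above.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional network for per-pixel classification."""

    def __init__(self, n_classes=4):  # 4 endoluminal classes, per the abstract
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 1/4 resolution
        )
        self.classifier = nn.Conv2d(64, n_classes, 1)     # per-pixel scores
        self.upsample = nn.Upsample(scale_factor=4, mode='bilinear',
                                    align_corners=False)

    def forward(self, x):
        return self.upsample(self.classifier(self.encoder(x)))

scores = TinyFCN()(torch.randn(1, 3, 224, 224))
print(scores.shape)  # torch.Size([1, 4, 224, 224])
```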
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; MV; 600.075; 600.085; 600.076; 601.281; 600.118 Approved no
Call Number VBS2017b Serial 2940
Permanent link to this record
 

 
Author (up) Dawid Rymarczyk; Joost van de Weijer; Bartosz Zielinski; Bartlomiej Twardowski
Title ICICLE: Interpretable Class Incremental Continual Learning Type Conference Article
Year 2023 Publication 20th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 1887-1898
Keywords
Abstract Continual learning enables incremental learning of new tasks without forgetting those previously learned, resulting in positive knowledge transfer that can enhance performance on both new and old tasks. However, continual learning poses new challenges for interpretability, as the rationale behind model predictions may change over time, leading to interpretability concept drift. We address this problem by proposing Interpretable Class-InCremental LEarning (ICICLE), an exemplar-free approach that adopts a prototypical part-based approach. It consists of three crucial novelties: interpretability regularization that distills previously learned concepts while preserving user-friendly positive reasoning; proximity-based prototype initialization strategy dedicated to the fine-grained setting; and task-recency bias compensation devoted to prototypical parts. Our experimental results demonstrate that ICICLE reduces the interpretability concept drift and outperforms the existing exemplar-free methods of common class-incremental learning when applied to concept-based models.
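One way to picture ICICLE's interpretability regularization is as a distillation term that keeps prototype-similarity maps stable across tasks. The PyTorch sketch below is a plausible reading of that idea, not the paper's exact loss; the tensor shapes and names are placeholders.

```python
import torch
import torch.nn.functional as F

def interpretability_distillation_loss(old_similarities, new_similarities):
    """Concept-distillation term for class-incremental learning (sketch).

    Penalizes drift between the prototype-similarity maps produced before
    and after learning a new task, so previously learned prototypical parts
    keep firing on the same image regions.
    """
    return F.mse_loss(new_similarities, old_similarities.detach())

# Toy usage: batch of 8 images, 10 prototypes, 7x7 similarity maps
old_sim = torch.rand(8, 10, 7, 7)                      # frozen old model
new_sim = torch.rand(8, 10, 7, 7, requires_grad=True)  # current model
interpretability_distillation_loss(old_sim, new_sim).backward()
```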
Address Paris; France; October 2023
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes LAMP Approved no
Call Number Admin @ si @ RWZ2023 Serial 3947
Permanent link to this record