Author Marc Masana; Xialei Liu; Bartlomiej Twardowski; Mikel Menta; Andrew Bagdanov; Joost Van de Weijer
Title Class-incremental learning: survey and performance evaluation Type Journal Article
Year 2022 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume Issue Pages
Keywords
Abstract For future learning systems incremental learning is desirable, because it allows for: efficient resource usage by eliminating the need to retrain from scratch at the arrival of new data; reduced memory usage by preventing or limiting the amount of data required to be stored -- also important when privacy limitations are imposed; and learning that more closely resembles human learning. The main challenge for incremental learning is catastrophic forgetting, which refers to the precipitous drop in performance on previously learned tasks after learning a new one. Incremental learning of deep neural networks has seen explosive growth in recent years. Initial work focused on task incremental learning, where a task-ID is provided at inference time. Recently we have seen a shift towards class-incremental learning where the learner must classify at inference time between all classes seen in previous tasks without recourse to a task-ID. In this paper, we provide a complete survey of existing methods for incremental learning, and in particular we perform an extensive experimental evaluation on twelve class-incremental methods. We consider several new experimental scenarios, including a comparison of class-incremental methods on multiple large-scale datasets, investigation into small and large domain shifts, and comparison on various network architectures.
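A minimal sketch (toy logits and a made-up class split, not from the paper) of the task-incremental vs. class-incremental distinction drawn in the abstract: with a task-ID, prediction is restricted to that task's classes; without one, the model must choose among all classes seen so far.

```python
import numpy as np

# Toy setup: two tasks were learned, each contributing two classes.
task_classes = {0: [0, 1], 1: [2, 3]}        # hypothetical class split per task
logits = np.array([1.2, 0.4, 0.9, 2.1])      # made-up network outputs for one test sample

# Task-incremental: the task-ID is given at inference time.
task_id = 0
masked = np.full_like(logits, -np.inf)
masked[task_classes[task_id]] = logits[task_classes[task_id]]
pred_task_il = int(np.argmax(masked))        # argmax restricted to task 0's classes -> class 0

# Class-incremental: no task-ID, prediction over all classes seen so far.
pred_class_il = int(np.argmax(logits))       # argmax over all four classes -> class 3

print(pred_task_il, pred_class_il)
```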
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.120 Approved no
Call Number Admin @ si @ MLT2022 Serial 3538
 

 
Author Carolina Malagelada; F. De Lorio; Santiago Segui; S. Mendez; Michal Drozdzal; Jordi Vitria; Petia Radeva; J. Santos; Anna Accarino; Juan R. Malagelada; Fernando Azpiroz
Title Functional gut disorders or disordered gut function? Small bowel dysmotility evidenced by an original technique Type Journal Article
Year 2012 Publication Neurogastroenterology & Motility Abbreviated Journal NEUMOT
Volume 24 Issue 3 Pages 223-230
Keywords capsule endoscopy; computer vision analysis; machine learning technique; small bowel motility
Abstract JCR Impact Factor 2010: 3.349
Background This study aimed to determine the proportion of cases with abnormal intestinal motility among patients with functional bowel disorders. To this end, we applied an original method, previously developed in our laboratory, for analysis of endoluminal images obtained by capsule endoscopy. This novel technology is based on computer vision and machine learning techniques.
 Methods The endoscopic capsule (Pillcam SB1; Given Imaging, Yokneam, Israel) was administered to 80 patients with functional bowel disorders and 70 healthy subjects. Endoluminal image analysis was performed with a computer vision program developed for the evaluation of contractile events (luminal occlusions and radial wrinkles), non-contractile patterns (open tunnel and smooth wall patterns), type of content (secretions, chyme) and motion of wall and contents. Normality range and discrimination of abnormal cases were established by a machine learning technique. Specifically, an iterative classifier (one-class support vector machine) was applied in a random population of 50 healthy subjects as a training set and the remaining subjects (20 healthy subjects and 80 patients) as a test set.
 Key Results The classifier identified as abnormal 29% of patients with functional diseases of the bowel (23 of 80), and as normal 97% of healthy subjects (68 of 70) (P < 0.05 by chi-squared test). Patients identified as abnormal clustered in two groups, which exhibited either a hyper- or a hypodynamic motility pattern. The motor behavior was unrelated to clinical features.
Conclusions &  Inferences With appropriate methodology, abnormal intestinal motility can be demonstrated in a significant proportion of patients with functional bowel disorders, implying a pathologic disturbance of gut physiology.
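A hedged sketch of the classification protocol summarised above: a one-class SVM fitted on healthy-subject descriptors defines the normality range, and test cases falling outside it are flagged as abnormal. The feature values below are random placeholders, not the study's capsule-endoscopy descriptors.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 8))    # 50 healthy training subjects, 8 motility descriptors (assumed)
X_test = rng.normal(size=(100, 8))    # stand-in for the 20 healthy subjects + 80 patients of the study

scaler = StandardScaler().fit(X_train)
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(X_train))

labels = clf.predict(scaler.transform(X_test))   # +1 = within normality range, -1 = abnormal
print((labels == -1).mean())                     # fraction of test cases flagged as abnormal
```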
Address
Corporate Author Thesis
Publisher Wiley Online Library Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; OR; MV Approved no
Call Number Admin @ si @ MLS2012 Serial 1830
 

 
Author Carolina Malagelada; F. De Lorio; Fernando Azpiroz; Santiago Segui; Petia Radeva; Anna Accarino; J. Santos; Juan R. Malagelada
Title Intestinal Dysmotility in Patients with Functional Intestinal Disorders Demonstrated by Computer Vision Analysis of Capsule Endoscopy Images Type Conference Article
Year 2010 Publication 18th United European Gastroenterology Week Abbreviated Journal
Volume 56 Issue 3 Pages A19-20
Keywords
Abstract
Address Barcelona
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference UEGW
Notes MILAB Approved no
Call Number Admin @ si @ MLA2010 Serial 1779
 

 
Author Minesh Mathew; Dimosthenis Karatzas; C.V. Jawahar
Title DocVQA: A Dataset for VQA on Document Images Type Conference Article
Year 2021 Publication IEEE Winter Conference on Applications of Computer Vision Abbreviated Journal
Volume Issue Pages 2200-2209
Keywords
Abstract We present a new dataset for Visual Question Answering (VQA) on document images called DocVQA. The dataset consists of 50,000 questions defined on 12,000+ document images. A detailed analysis of the dataset in comparison with similar datasets for VQA and reading comprehension is presented. We report several baseline results by adopting existing VQA and reading comprehension models. Although the existing models perform reasonably well on certain types of questions, there is a large performance gap compared to human performance (94.36% accuracy). The models need to improve specifically on questions where understanding the structure of the document is crucial. The dataset, code and leaderboard are available at docvqa.org.
Address Virtual; January 2021
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WACV
Notes DAG; 600.121 Approved no
Call Number Admin @ si @ MKJ2021 Serial 3498
 

 
Author Patricia Marquez; H. Kause; A. Fuster; Aura Hernandez-Sabate; L. Florack; Debora Gil; Hans van Assen
Title Factors Affecting Optical Flow Performance in Tagging Magnetic Resonance Imaging Type Conference Article
Year 2014 Publication 17th International Conference on Medical Image Computing and Computer Assisted Intervention Abbreviated Journal
Volume 8896 Issue Pages 231-238
Keywords Optical flow; Performance Evaluation; Synthetic Database; ANOVA; Tagging Magnetic Resonance Imaging
Abstract Changes in cardiac deformation patterns are correlated with cardiac pathologies. Deformation can be extracted from tagging Magnetic Resonance Imaging (tMRI) using Optical Flow (OF) techniques. For applications of OF in a clinical setting it is important to assess to what extent the performance of a particular OF method is stable across different clinical acquisition artifacts. This paper presents a statistical validation framework, based on ANOVA, to assess the motion and appearance factors that have the largest influence on OF accuracy drop. In order to validate this framework, we created a database of simulated tMRI data including the most common artifacts of MRI and tested three different OF methods, including HARP.
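A minimal sketch of an ANOVA of this kind, assuming synthetic per-sequence end-point errors and made-up motion/appearance factor levels (not the paper's simulated database), to show how the influence of acquisition factors on OF accuracy can be tested:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "motion":     rng.choice(["small", "large"], size=120),           # assumed factor levels
    "appearance": rng.choice(["clean", "noisy", "fading"], size=120), # assumed factor levels
})
# Synthetic end-point error: baseline noise plus a penalty for degraded appearance.
df["epe"] = rng.normal(1.0, 0.2, size=120) + (df["appearance"] != "clean") * 0.5

model = ols("epe ~ C(motion) * C(appearance)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # two-way ANOVA table with interaction term
```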
Address Boston; USA; September 2014
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-14677-5 Medium
Area Expedition Conference STACOM
Notes IAM; ADAS; 600.060; 601.145; 600.076; 600.075 Approved no
Call Number Admin @ si @ MKF2014 Serial 2495
 

 
Author Saad Minhas; Zeba Khanam; Shoaib Ehsan; Klaus McDonald Maier; Aura Hernandez-Sabate
Title Weather Classification by Utilizing Synthetic Data Type Journal Article
Year 2022 Publication Sensors Abbreviated Journal SENS
Volume 22 Issue 9 Pages 3193
Keywords Weather classification; synthetic data; dataset; autonomous car; computer vision; advanced driver assistance systems; deep learning; intelligent transportation systems
Abstract Weather prediction from real-world images is a complex task when approached as classification with neural networks. Moreover, the available datasets show a large amount of variance in the locations and weather conditions the images represent. In this article, the capabilities of a custom-built driving simulator are explored, specifically its ability to simulate a wide range of weather conditions. The performance obtained with a new synthetic dataset generated by this simulator is also assessed. The results indicate that the use of synthetic datasets in conjunction with real-world datasets can increase the training efficiency of the CNNs by as much as 74%. The article paves the way towards tackling the persistent problem of bias in vision-based datasets.
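As a minimal, hypothetical sketch (placeholder directory paths and an off-the-shelf ResNet, not the paper's network or data), mixing real and simulator-generated weather images for training can be as simple as concatenating the two datasets:

```python
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms, models

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
real = datasets.ImageFolder("data/real_weather", transform=tfm)            # assumed folder layout
synthetic = datasets.ImageFolder("data/synthetic_weather", transform=tfm)  # simulator output (assumed)

loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=32, shuffle=True)
model = models.resnet18(num_classes=len(real.classes))                     # one output per weather class
```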
Address 21 April 2022
Corporate Author Thesis
Publisher MDPI Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.139; 600.159; 600.166; 600.145; Approved no
Call Number Admin @ si @ MKE2022 Serial 3761
 

 
Author Weiqing Min; Shuqiang Jiang; Jitao Sang; Huayang Wang; Xinda Liu; Luis Herranz
Title Being a Supercook: Joint Food Attributes and Multimodal Content Modeling for Recipe Retrieval and Exploration Type Journal Article
Year 2017 Publication IEEE Transactions on Multimedia Abbreviated Journal TMM
Volume 19 Issue 5 Pages 1100 - 1113
Keywords
Abstract This paper considers the problem of recipe-oriented image-ingredient correlation learning with multi-attributes for recipe retrieval and exploration. Existing methods mainly focus on food visual information for recognition, while we model visual information, textual content (e.g., ingredients), and attributes (e.g., cuisine and course) together to solve extended recipe-oriented problems, such as multimodal cuisine classification and attribute-enhanced food image retrieval. As a solution, we propose a multimodal multitask deep belief network (M3TDBN) to learn joint image-ingredient representation regularized by different attributes. By grouping ingredients into visible ingredients (which are visible in the food image, e.g., “chicken” and “mushroom”) and nonvisible ingredients (e.g., “salt” and “oil”), M3TDBN is capable of learning both the mid-level visual representation between images and visible ingredients and the nonvisual representation. Furthermore, in order to utilize different attributes to improve the intermodality correlation, M3TDBN incorporates multitask learning to make different attributes collaborate with each other. Based on the proposed M3TDBN, we exploit the derived deep features and the discovered correlations for three extended novel applications: 1) multimodal cuisine classification; 2) attribute-augmented cross-modal recipe image retrieval; and 3) ingredient and attribute inference from food images. The proposed approach is evaluated on the constructed Yummly dataset and the evaluation results validate its effectiveness.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.120 Approved no
Call Number Admin @ si @ MJS2017 Serial 2964
 

 
Author Ozge Mercanoglu Sincan; Julio C. S. Jacques Junior; Sergio Escalera; Hacer Yalim Keles
Title ChaLearn LAP Large Scale Signer Independent Isolated Sign Language Recognition Challenge: Design, Results and Future Research Type Conference Article
Year 2021 Publication Conference on Computer Vision and Pattern Recognition Workshops Abbreviated Journal
Volume Issue Pages 3467-3476
Keywords
Abstract The performance of Sign Language Recognition (SLR) systems has improved considerably in recent years. However, several open challenges still need to be solved to allow SLR to be useful in practice. Research in the field is still in its infancy with regard to the robustness of models to a large diversity of signs and signers, and to the fairness of models towards performers from different demographics. This work summarises the ChaLearn LAP Large Scale Signer Independent Isolated SLR Challenge, organised at CVPR 2021 with the goal of overcoming some of the aforementioned challenges. We analyse and discuss the challenge design, top winning solutions and suggestions for future research. The challenge attracted 132 participants in the RGB track and 59 in the RGB+Depth track, receiving more than 1.5K submissions in total. Participants were evaluated using a new large-scale multi-modal Turkish Sign Language (AUTSL) dataset, consisting of 226 sign labels and 36,302 isolated sign video samples performed by 43 different signers. Winning teams achieved more than 96% recognition rate, and their approaches benefited from pose/hand/face estimation, transfer learning, external data, fusion/ensemble of modalities and different strategies to model spatio-temporal information. However, methods still fail to distinguish among very similar signs, in particular those sharing similar hand trajectories.
Address Virtual; June 2021
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes HuPBA; no proj Approved no
Call Number Admin @ si @ MJE2021 Serial 3560
 

 
Author Lasse Martensson; Anders Hast; Alicia Fornes
Title Word Spotting as a Tool for Scribal Attribution Type Conference Article
Year 2017 Publication 2nd Conference of the association of Digital Humanities in the Nordic Countries Abbreviated Journal
Volume Issue Pages 87-89
Keywords
Abstract
Address Gothenburg; Suecia; March 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-91-88348-83-8 Medium
Area Expedition Conference DHN
Notes DAG; 600.097; 600.121 Approved no
Call Number Admin @ si @ MHF2017 Serial 2954
 

 
Author Saad Minhas; Aura Hernandez-Sabate; Shoaib Ehsan; Klaus McDonald Maier
Title Effects of Non-Driving Related Tasks during Self-Driving mode Type Journal Article
Year 2022 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS
Volume 23 Issue 2 Pages 1391-1399
Keywords
Abstract Perception reaction time and mental workload have proven to be crucial in manual driving. Moreover, in highly automated cars, where most research focuses on Level 4 autonomous driving, take-over performance is also a key factor when taking road safety into account. This study investigates how immersion in non-driving related tasks affects the take-over performance of drivers in given scenarios. The paper also highlights the use of virtual simulators to gather efficient data that can be crucial in easing the transition between manual and autonomous driving scenarios. The use of computer-aided simulation is of particular importance as the automotive industry moves rapidly towards autonomous technology. An experiment comprising 40 subjects was performed to examine driver reaction times and the influence of other variables on the success of take-over performance in highly automated driving under different circumstances within a highway virtual environment. The results reflect the relationship between reaction times under the different scenarios that drivers might face in the circumstances stated above, as well as the importance of variables such as velocity in successfully regaining car control after automated driving. The implications of these results are important for understanding the criteria needed for designing Human Machine Interfaces specifically aimed at automated driving conditions. Understanding the need to keep drivers in the loop during automation, whilst allowing them to safely engage in other non-driving related tasks, is an important research area which can be aided by the proposed study.
Address Feb. 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.139; 600.145 Approved no
Call Number Admin @ si @ MHE2022 Serial 3468
 

 
Author Adria Molina; Lluis Gomez; Oriol Ramos Terrades; Josep Llados
Title A Generic Image Retrieval Method for Date Estimation of Historical Document Collections Type Conference Article
Year 2022 Publication Document Analysis Systems: 15th IAPR International Workshop (DAS 2022) Abbreviated Journal
Volume 13237 Issue Pages 583–597
Keywords Date estimation; Document retrieval; Image retrieval; Ranking loss; Smooth-nDCG
Abstract Date estimation of historical document images is a challenging problem, with several contributions in the literature that lack the ability to generalize from one dataset to another. This paper presents a robust date estimation system based on a retrieval approach that generalizes well across heterogeneous collections. We use a ranking loss function named smooth-nDCG to train a Convolutional Neural Network that learns an ordering of the documents for each problem. One of the main uses of the presented approach is as a tool for historical contextual retrieval: scholars can perform comparative analysis of historical images from large datasets in terms of the period in which they were produced. We provide experimental evaluation on different types of documents from real datasets of manuscript and newspaper images.
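A hedged sketch of a smooth (differentiable) nDCG surrogate of the kind the abstract names; this is a generic sigmoid-based rank relaxation, not necessarily the authors' exact smooth-nDCG formulation, and the relevance gains are an assumption.

```python
import torch

def smooth_ndcg_loss(scores: torch.Tensor, relevance: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """scores: predicted scores (N,); relevance: ground-truth gains (N,), e.g. closeness to the query date."""
    diff = scores.unsqueeze(0) - scores.unsqueeze(1)      # diff[i, j] = s_i - s_j
    # Soft rank of item j: 1 + soft count of items scoring above it (0.5 offsets the self-comparison).
    ranks = 0.5 + torch.sigmoid(diff / tau).sum(dim=0)
    dcg = (relevance / torch.log2(ranks + 1.0)).sum()
    ideal_rel, _ = torch.sort(relevance, descending=True)
    ideal_ranks = torch.arange(1, relevance.numel() + 1, dtype=scores.dtype)
    idcg = (ideal_rel / torch.log2(ideal_ranks + 1.0)).sum()
    return 1.0 - dcg / idcg                               # minimise 1 - smooth nDCG

scores = torch.randn(8, requires_grad=True)               # e.g. CNN outputs for 8 documents (toy)
relevance = torch.rand(8)                                  # assumed gains for those documents (toy)
smooth_ndcg_loss(scores, relevance).backward()             # loss is differentiable w.r.t. the scores
```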
Address La Rochelle, France; May 22–25, 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 600.140; 600.121 Approved no
Call Number Admin @ si @ MGR2022 Serial 3694
 

 
Author Alina Matei; Andreea Glavan; Petia Radeva; Estefania Talavera
Title Towards Eating Habits Discovery in Egocentric Photo-Streams Type Journal Article
Year 2021 Publication IEEE Access Abbreviated Journal ACCESS
Volume 9 Issue Pages 17495-17506
Keywords
Abstract Eating habits are learned throughout the early stages of our lives. However, it is not easy to be aware of how our food-related routine affects our healthy living. In this work, we address the unsupervised discovery of nutritional habits from egocentric photo-streams. We build a food-related behavioral pattern discovery model, which discloses nutritional routines from the activities performed throughout the days. To do so, we rely on Dynamic Time Warping to evaluate the similarity among the collected days. Within this framework, we present a simple but robust and fast novel classification pipeline that outperforms the state of the art on food-related image classification with a weighted accuracy and F-score of 70% and 63%, respectively. We then apply the Isolation Forest method to identify, as anomalies in the user's daily life, days whose nutritional activities do not reflect the person's habits. Furthermore, we show an application for the identification of food-related scenes when the camera wearer eats in isolation. The results show the good performance of the proposed model and its relevance for visualizing the nutritional habits of individuals.
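A hedged sketch of the anomaly-detection step described above: an Isolation Forest flags atypical days. The paper compares days with Dynamic Time Warping; here, purely for illustration, each day is reduced to a plain activity-frequency vector with synthetic counts.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
n_days, n_activities = 60, 6                  # e.g. breakfast, snack, lunch, ... (assumed labels)
days = rng.poisson(lam=2.0, size=(n_days, n_activities)).astype(float)
days[-3:] += rng.poisson(lam=6.0, size=(3, n_activities))   # inject a few atypical days

detector = IsolationForest(contamination=0.05, random_state=0).fit(days)
flags = detector.predict(days)                # +1 = habitual day, -1 = anomalous day
print(np.where(flags == -1)[0])               # indices of days flagged as anomalies
```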
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ MGR2021 Serial 3637
 

 
Author C. Mariño; V.M. Gulias; M.G. Penas; M. Penedo; Victor Leboran; A. Mosquera; M.J. Carreira; David Lloret
Title Sistema de Interpretacion Automatica de Secuencias solo Basado en un Servidor VOD [Automatic Interpretation System for Sequences Based Only on a VOD Server] Type Miscellaneous
Year 2001 Publication Proceedings of the SIT2001. Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number Admin @ si @ MGP2001 Serial 196
 

 
Author Patricia Marquez; Debora Gil; R.Mester; Aura Hernandez-Sabate
Title Local Analysis of Confidence Measures for Optical Flow Quality Evaluation Type Conference Article
Year 2014 Publication 9th International Conference on Computer Vision Theory and Applications Abbreviated Journal
Volume 3 Issue Pages 450-457
Keywords Optical Flow; Confidence Measure; Performance Evaluation.
Abstract Optical Flow (OF) techniques able to face the complexity of real sequences have been developed in recent years. Even using the most appropriate technique for a specific problem, at some points the output flow might fail to achieve the minimum error required by the system. Confidence measures computed from either the input data or the OF output should discard those points where the OF is not accurate enough for further use. It follows that evaluating the capability of a confidence measure to bound OF error is as important as the definition of the measure itself. In this paper we analyze different confidence measures and point out their advantages and limitations for use in real-world settings. We also examine the agreement among current tools for evaluating the performance of confidence measures.
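One common tool of this kind is the sparsification curve; the sketch below (synthetic errors and a made-up noisy confidence, not the paper's measures) discards the least-confident pixels first and checks that the mean end-point error of the remaining pixels drops.

```python
import numpy as np

rng = np.random.default_rng(3)
epe = rng.gamma(shape=2.0, scale=0.5, size=10_000)           # synthetic per-pixel end-point error
confidence = -epe + rng.normal(0.0, 0.3, size=epe.size)      # imperfect measure: higher = more reliable

order = np.argsort(confidence)                                # least confident pixels first
fractions = np.linspace(0.0, 0.9, 10)
curve = [epe[order[int(f * epe.size):]].mean() for f in fractions]
print(np.round(curve, 3))                                     # mean EPE left after discarding each fraction
```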
Address Lisboa; January 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISAPP
Notes IAM; ADAS; 600.044; 600.060; 600.057; 601.145; 600.076; 600.075 Approved no
Call Number Admin @ si @ MGM2014 Serial 2432
 

 
Author Minesh Mathew; Lluis Gomez; Dimosthenis Karatzas; C.V. Jawahar
Title Asking questions on handwritten document collections Type Journal Article
Year 2021 Publication International Journal on Document Analysis and Recognition Abbreviated Journal IJDAR
Volume 24 Issue Pages 235-249
Keywords
Abstract This work addresses the problem of Question Answering (QA) on handwritten document collections. Unlike typical QA and Visual Question Answering (VQA) formulations where the answer is a short text, we aim to locate a document snippet where the answer lies. The proposed approach works without recognizing the text in the documents. We argue that the recognition-free approach is suitable for handwritten documents and historical collections where robust text recognition is often difficult. At the same time, for human users, document image snippets containing answers act as a valid alternative to textual answers. The proposed approach uses an off-the-shelf deep embedding network which can project both textual words and word images into a common sub-space. This embedding bridges the textual and visual domains and helps us retrieve document snippets that potentially answer a question. We evaluate results of the proposed approach on two new datasets: (i) HW-SQuAD: a synthetic, handwritten document image counterpart of SQuAD1.0 dataset and (ii) BenthamQA: a smaller set of QA pairs defined on documents from the popular Bentham manuscripts collection. We also present a thorough analysis of the proposed recognition-free approach compared to a recognition-based approach which uses text recognized from the images using an OCR. Datasets presented in this work are available to download at docvqa.org.
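A hedged sketch of the recognition-free retrieval idea described above: query words and word images are mapped into a common space and candidate snippets are ranked by similarity. The two embedding functions below are random-projection stand-ins for the pre-trained word/word-image embedding network mentioned in the abstract; all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 128                                               # embedding dimension (assumed)

def embed_text(word: str) -> np.ndarray:
    vec = rng.normal(size=D)                          # placeholder for the textual branch of the embedding
    return vec / np.linalg.norm(vec)

def embed_word_image(image: np.ndarray) -> np.ndarray:
    vec = rng.normal(size=D)                          # placeholder for the visual (word-image) branch
    return vec / np.linalg.norm(vec)

# One embedding per detected word image in the document collection (toy images).
word_images = [rng.random((32, 96)) for _ in range(500)]
index = np.stack([embed_word_image(im) for im in word_images])

query = embed_text("bentham")                         # a hypothetical query keyword
scores = index @ query                                # cosine similarity (all vectors are unit-norm)
top5 = np.argsort(-scores)[:5]                        # word images most likely to answer the question
print(top5)
```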
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.121 Approved no
Call Number Admin @ si @ MGK2021 Serial 3621