Author |
Soumya Jahagirdar; Minesh Mathew; Dimosthenis Karatzas; CV Jawahar |
|
|
Title |
Understanding Video Scenes Through Text: Insights from Text-Based Video Question Answering |
Type |
Conference Article |
|
Year |
2023 |
Publication |
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Researchers have extensively studied the field of vision and language, discovering that both visual and textual content is crucial for understanding scenes effectively. Particularly, comprehending text in videos holds great significance, requiring both scene text understanding and temporal reasoning. This paper focuses on exploring two recently introduced datasets, NewsVideoQA and M4-ViteVQA, which aim to address video question answering based on textual content. The NewsVideoQA dataset contains question-answer pairs related to the text in news videos, while M4-ViteVQA comprises question-answer pairs from diverse categories like vlogging, traveling, and shopping. We provide an analysis of the formulation of these datasets on various levels, exploring the degree of visual understanding and multi-frame comprehension required for answering the questions. Additionally, the study includes experimentation with BERT-QA, a text-only model, which demonstrates performance comparable to the original methods on both datasets, indicating shortcomings in the formulation of these datasets. Furthermore, we also look into the domain adaptation aspect by examining the effectiveness of training on M4-ViteVQA and evaluating on NewsVideoQA and vice versa, thereby shedding light on the challenges and potential benefits of out-of-domain training. |
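As a rough illustration of the text-only baseline discussed in this abstract, the sketch below runs an off-the-shelf extractive QA model over OCR text with the Hugging Face transformers pipeline; the checkpoint name and the example strings are assumptions for illustration, not the paper's setup.

```python
# Minimal sketch of a text-only extractive QA baseline in the spirit of the
# BERT-QA experiment above. Assumes the Hugging Face `transformers` package;
# the checkpoint name and the OCR strings are illustrative placeholders.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="deepset/bert-base-cased-squad2",  # any extractive QA checkpoint
)

# Concatenated OCR tokens from sampled video frames (hypothetical example).
ocr_context = "BREAKING NEWS Heavy rain floods downtown Mumbai on Tuesday"
question = "Which city was flooded?"

result = qa(question=question, context=ocr_context)
print(result["answer"], result["score"])  # extractive span + confidence
```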
|
|
Address |
Paris; France; October 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICCVW |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ JMK2023 |
Serial |
3946 |
|
|
|
|
|
Author |
Dawid Rymarczyk; Joost van de Weijer; Bartosz Zielinski; Bartlomiej Twardowski |
|
|
Title |
ICICLE: Interpretable Class Incremental Continual Learning |
Type |
Conference Article |
|
Year |
2023 |
Publication |
20th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1887-1898 |
|
|
Keywords |
|
|
|
Abstract |
Continual learning enables incremental learning of new tasks without forgetting those previously learned, resulting in positive knowledge transfer that can enhance performance on both new and old tasks. However, continual learning poses new challenges for interpretability, as the rationale behind model predictions may change over time, leading to interpretability concept drift. We address this problem by proposing Interpretable Class-InCremental LEarning (ICICLE), an exemplar-free method built on prototypical parts. It consists of three crucial novelties: an interpretability regularization that distills previously learned concepts while preserving user-friendly positive reasoning; a proximity-based prototype initialization strategy dedicated to the fine-grained setting; and a task-recency bias compensation devoted to prototypical parts. Our experimental results demonstrate that ICICLE reduces interpretability concept drift and outperforms existing exemplar-free methods for common class-incremental learning when applied to concept-based models. |
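The interpretability regularization described above distills previously learned concepts; a minimal sketch of such a term, assuming prototype similarity maps of shape (batch, n_old_prototypes, H, W) and an MSE distance (both assumptions, not the authors' implementation), could look like:

```python
# Hedged sketch of an interpretability-regularization term in the spirit of
# ICICLE: keep the current model's similarity maps for *old* prototypes close
# to the frozen previous model's maps, so learned concepts keep "looking" the
# same. Shapes and the MSE distance are assumptions.
import torch
import torch.nn.functional as F

def interpretability_regularization(old_sim: torch.Tensor,
                                    new_sim: torch.Tensor) -> torch.Tensor:
    """old_sim, new_sim: (batch, n_old_prototypes, H, W) similarity maps from
    the frozen previous-task model and the current model, respectively."""
    # Distill only old prototypes; new-task prototypes remain free to adapt.
    return F.mse_loss(new_sim, old_sim.detach())

# Combined with the task loss via a tunable weight, e.g.:
# loss = task_loss + reg_weight * interpretability_regularization(old_m, new_m)
```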
|
|
Address |
Paris; France; October 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICCV |
|
|
Notes |
LAMP |
Approved |
no |
|
|
Call Number |
Admin @ si @ RWZ2023 |
Serial |
3947 |
|
|
|
|
|
Author |
Jordy Van Landeghem; Ruben Tito; Lukasz Borchmann; Michal Pietruszka; Pawel Joziak; Rafal Powalski; Dawid Jurkiewicz; Mickael Coustaty; Bertrand Anckaert; Ernest Valveny; Matthew Blaschko; Sien Moens; Tomasz Stanislawek |
|
|
Title |
Document Understanding Dataset and Evaluation (DUDE) |
Type |
Conference Article |
|
Year |
2023 |
Publication |
20th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
19528-19540 |
|
|
Keywords |
|
|
|
Abstract |
We call on the Document AI (DocAI) community to re-evaluate current methodologies and embrace the challenge of creating more practically-oriented benchmarks. Document Understanding Dataset and Evaluation (DUDE) seeks to remediate the halted research progress in understanding visually-rich documents (VRDs). We present a new dataset with novelties related to types of questions, answers, and document layouts based on multi-industry, multi-domain, and multi-page VRDs of various origins and dates. Moreover, we are pushing the boundaries of current methods by creating multi-task and multi-domain evaluation setups that more accurately simulate real-world situations where powerful generalization and adaptation under low-resource settings are desired. DUDE aims to set a new standard as a more practical, long-standing benchmark for the community, and we hope that it will lead to future extensions and contributions that address real-world challenges. Finally, our work illustrates the importance of finding more efficient ways to model language, images, and layout in DocAI. |
|
|
Address |
Paris; France; October 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICCV |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ LTB2023 |
Serial |
3948 |
|
|
|
|
|
Author |
Yuyang Liu; Yang Cong; Dipam Goswami; Xialei Liu; Joost Van de Weijer |
|
|
Title |
Augmented Box Replay: Overcoming Foreground Shift for Incremental Object Detection |
Type |
Conference Article |
|
Year |
2023 |
Publication |
20th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
11367-11377 |
|
|
Keywords |
|
|
|
Abstract |
In incremental learning, replaying stored samples from previous tasks together with current task samples is one of the most efficient approaches to address catastrophic forgetting. However, unlike incremental classification, image replay has not been successfully applied to incremental object detection (IOD). In this paper, we identify the overlooked problem of foreground shift as the main reason for this. Foreground shift only occurs when replaying images of previous tasks and refers to the fact that their background might contain foreground objects of the current task. To overcome this problem, a novel and efficient Augmented Box Replay (ABR) method is developed that only stores and replays foreground objects and thereby circumvents the foreground shift problem. In addition, we propose an innovative Attentive RoI Distillation loss that uses spatial attention from region-of-interest (RoI) features to constrain the current model to focus on the most important information from the old model. ABR significantly reduces forgetting of previous classes while maintaining high plasticity in current classes. Moreover, it considerably reduces the storage requirements when compared to standard image replay. Comprehensive experiments on Pascal-VOC and COCO datasets support the state-of-the-art performance of our model. |
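A minimal sketch of the box-replay idea named in this abstract, under the assumption that old-task foreground crops are stored as image patches and pasted at random locations; the paste policy and data layout here are illustrative, not the paper's implementation:

```python
# Hedged sketch of the core idea behind Augmented Box Replay: store only
# cropped foreground objects from old tasks and paste them into current-task
# images, so replayed backgrounds can never hide unlabeled current-task
# objects (the foreground shift described above).
import random
import numpy as np

def augmented_box_replay(image: np.ndarray, boxes: list, memory: list):
    """image: (H, W, 3) current-task image; boxes: [x1, y1, x2, y2, cls] labels;
    memory: list of (patch, class_id) foreground crops stored from old tasks.
    Assumes every stored crop fits inside the image."""
    h, w = image.shape[:2]
    patch, class_id = random.choice(memory)
    ph, pw = patch.shape[:2]
    # Random location for the replayed old-task object.
    y = random.randint(0, h - ph)
    x = random.randint(0, w - pw)
    out = image.copy()
    out[y:y + ph, x:x + pw] = patch                         # paste foreground
    new_boxes = boxes + [[x, y, x + pw, y + ph, class_id]]  # add its label
    return out, new_boxes
```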
|
|
Address |
Paris; France; October 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICCV |
|
|
Notes |
LAMP |
Approved |
no |
|
|
Call Number |
Admin @ si @ LCG2023 |
Serial |
3949 |
|
|
|
|
|
Author |
Guillermo Torres; Debora Gil; Antoni Rosell; S. Mena; Carles Sanchez |
|
|
Title |
Virtual Radiomics Biopsy for the Histological Diagnosis of Pulmonary Nodules |
Type |
Conference Article |
|
Year |
2023 |
Publication |
37th International Congress and Exhibition on Computer Assisted Radiology and Surgery (CARS) |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Poster |
|
|
Address |
Munich; Germany; June 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CARS |
|
|
Notes |
IAM |
Approved |
no |
|
|
Call Number |
Admin @ si @ TGR2023a |
Serial |
3950 |
|
|
|
|
|
Author |
Sonia Baeza; Debora Gil; Carles Sanchez; Guillermo Torres; Ignasi Garcia Olive; Ignasi Guasch; Samuel Garcia Reina; Felipe Andreo; Jose Luis Mate; Jose Luis Vercher; Antonio Rosell |
|
|
Title |
Biopsia virtual radiomica para el diagnóstico histológico de nódulos pulmonares – Resultados intermedios del proyecto Radiolung [Virtual radiomics biopsy for the histological diagnosis of pulmonary nodules – interim results of the Radiolung project] |
Type |
Conference Article |
|
Year |
2023 |
Publication |
SEPAR |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Poster |
|
|
Address |
Granada; Spain; June 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
SEPAR |
|
|
Notes |
IAM |
Approved |
no |
|
|
Call Number |
Admin @ si @ BGS2023 |
Serial |
3951 |
|
|
|
|
|
Author |
Debora Gil; Guillermo Torres; Carles Sanchez |
|
|
Title |
Transforming radiomic features into radiological words |
Type |
Conference Article |
|
Year |
2023 |
Publication |
IEEE International Symposium on Biomedical Imaging |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Poster |
|
|
Address |
Cartagena de Indias; Colombia; April 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISBI |
|
|
Notes |
IAM |
Approved |
no |
|
|
Call Number |
Admin @ si @ GTS2023 |
Serial |
3952 |
|
|
|
|
|
Author |
Pau Cano; Debora Gil; Eva Musulen |
|
|
Title |
Towards automatic detection of helicobacter pylori in histological samples of gastric tissue |
Type |
Conference Article |
|
Year |
2023 |
Publication |
IEEE International Symposium on Biomedical Imaging |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Cartagena de Indias; Colombia; April 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISBI |
|
|
Notes |
IAM |
Approved |
no |
|
|
Call Number |
Admin @ si @ CGM2023 |
Serial |
3953 |
|
|
|
|
|
Author |
Guillermo Torres; Debora Gil; Antonio Rosell; Sonia Baeza; Carles Sanchez |
|
|
Title |
A radiomic biopsy for virtual histology of pulmonary nodules |
Type |
Conference Article |
|
Year |
2023 |
Publication |
IEEE International Symposium on Biomedical Imaging |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Poster |
|
|
Address |
Cartagena de Indias; Colombia; April 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISBI |
|
|
Notes |
IAM |
Approved |
no |
|
|
Call Number |
Admin @ si @ TGR2023b |
Serial |
3954 |
|
|
|
|
|
Author |
Artur Xarles; Sergio Escalera; Thomas B. Moeslund; Albert Clapes |
|
|
Title |
ASTRA: An Action Spotting TRAnsformer for Soccer Videos |
Type |
Conference Article |
|
Year |
2023 |
Publication |
Proceedings of the 6th International Workshop on Multimedia Content Analysis in Sports |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
93–102 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we introduce ASTRA, a Transformer-based model designed for the task of Action Spotting in soccer matches. ASTRA addresses several challenges inherent in the task and dataset, including the requirement for precise action localization, the presence of a long-tail data distribution, non-visibility in certain actions, and inherent label noise. To do so, ASTRA incorporates (a) a Transformer encoder-decoder architecture to achieve the desired output temporal resolution and to produce precise predictions, (b) a balanced mixup strategy to handle the long-tail distribution of the data, (c) an uncertainty-aware displacement head to capture the label variability, and (d) an input audio signal to enhance the detection of non-visible actions. Results demonstrate the effectiveness of ASTRA, achieving a tight Average-mAP of 66.82 on the test set. Moreover, in the SoccerNet 2023 Action Spotting challenge, we secure the 3rd position with an Average-mAP of 70.21 on the challenge set. |
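The balanced mixup strategy mentioned in (b) can be sketched as follows, assuming a second batch drawn from a class-balanced sampler and multi-hot action labels; the Beta parameter and label format are assumptions, not the paper's exact formulation:

```python
# Hedged sketch of a balanced-mixup step: mix each uniformly sampled batch
# with a partner batch drawn from a class-balanced sampler, so long-tail
# actions appear in mixed examples more often.
import torch

def balanced_mixup(x: torch.Tensor, y: torch.Tensor,
                   x_bal: torch.Tensor, y_bal: torch.Tensor,
                   alpha: float = 0.2):
    """x, y: uniformly sampled batch; x_bal, y_bal: class-balanced batch;
    y tensors are multi-hot action labels of shape (batch, n_classes)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x + (1.0 - lam) * x_bal   # mix inputs
    y_mix = lam * y + (1.0 - lam) * y_bal   # mix soft labels accordingly
    return x_mix, y_mix
```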
|
|
Address |
Ottawa; Canada; October 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MMSports |
|
|
Notes |
HUPBA |
Approved |
no |
|
|
Call Number |
Admin @ si @ XEM2023 |
Serial |
3970 |
|
|
|
|
|
Author |
Eduardo Aguilar; Bogdan Raducanu; Petia Radeva; Joost Van de Weijer |
|
|
Title |
Continual Evidential Deep Learning for Out-of-Distribution Detection |
Type |
Conference Article |
|
Year |
2023 |
Publication |
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
3444-3454 |
|
|
Keywords |
|
|
|
Abstract |
Uncertainty-based deep learning models have attracted a great deal of interest for their ability to provide accurate and reliable predictions. Evidential deep learning stands out, achieving remarkable performance in detecting out-of-distribution (OOD) data with a single deterministic neural network. Motivated by this fact, in this paper we propose the integration of an evidential deep learning method into a continual learning framework in order to perform simultaneously incremental object classification and OOD detection. Moreover, we analyze the ability of vacuity and dissonance to differentiate between in-distribution data belonging to old classes and OOD data. The proposed method, called CEDL, is evaluated on CIFAR-100 considering two settings consisting of 5 and 10 tasks, respectively. The results show that the proposed method, in addition to providing comparable results in object classification with respect to the baseline, largely outperforms several post-hoc OOD detection methods on three evaluation metrics: AUROC, AUPR and FPR95. |
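Vacuity and dissonance, the two evidential uncertainty measures analyzed above, follow standard subjective-logic definitions for a Dirichlet parameterization alpha = evidence + 1. A self-contained sketch of those definitions (not necessarily the paper's code):

```python
# Hedged sketch of the two evidential uncertainty measures analyzed above,
# computed from a Dirichlet parameterization alpha = evidence + 1 following
# the usual subjective-logic definitions.
import torch

def vacuity_and_dissonance(evidence: torch.Tensor):
    """evidence: (batch, K) non-negative per-class evidence from the network."""
    K = evidence.shape[1]
    alpha = evidence + 1.0
    strength = alpha.sum(dim=1, keepdim=True)        # Dirichlet strength S
    belief = evidence / strength                     # belief masses b_k
    vacuity = K / strength.squeeze(1)                # u = K / S

    # Dissonance: belief spread over mutually conflicting classes.
    b_j = belief.unsqueeze(2)                        # (batch, K, 1)
    b_k = belief.unsqueeze(1)                        # (batch, 1, K)
    bal = 1.0 - (b_j - b_k).abs() / (b_j + b_k + 1e-8)   # relative balance
    diag = torch.eye(K, dtype=torch.bool, device=belief.device)
    bal = bal.masked_fill(diag.unsqueeze(0), 0.0)    # exclude j == k terms
    num = (b_k * bal).sum(dim=2)                     # sum_k b_k * Bal(b_j, b_k)
    den = belief.sum(dim=1, keepdim=True) - belief + 1e-8
    dissonance = (belief * num / den).sum(dim=1)
    return vacuity, dissonance

# Intuition: OOD inputs tend to produce high vacuity (little total evidence),
# while ambiguous in-distribution inputs produce high dissonance.
```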
|
|
Address |
Paris; France; October 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICCVW |
|
|
Notes |
LAMP; MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ ARR2023 |
Serial |
3974 |
|
|
|
|
|
Author |
Alloy Das; Sanket Biswas; Umapada Pal; Josep Llados |
|
|
Title |
Diving into the Depths of Spotting Text in Multi-Domain Noisy Scenes |
Type |
Conference Article |
|
Year |
2024 |
Publication |
IEEE International Conference on Robotics and Automation |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
When used in a real-world noisy environment, the capacity to generalize to multiple domains is essential for any autonomous scene text spotting system. However, existing state-of-the-art methods employ pretraining and fine-tuning strategies on natural scene datasets, which do not exploit the feature interaction across other complex domains. In this work, we explore and investigate the problem of domain-agnostic scene text spotting, i.e., training a model on multi-domain source data such that it can directly generalize to target domains rather than being specialized for a specific domain or scenario. In this regard, we present to the community a text-spotting validation benchmark called Under-Water Text (UWT) for noisy underwater scenes, establishing an important case study. Moreover, we design an efficient super-resolution-based end-to-end transformer baseline called DA-TextSpotter, which achieves comparable or superior performance over existing text spotting architectures for both regular and arbitrary-shaped scene text spotting benchmarks in terms of both accuracy and model efficiency. The dataset, code and pre-trained models will be released upon acceptance. |
|
|
Address |
Yokohama; Japan; May 2024 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICRA |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ DBP2024 |
Serial |
3979 |
|
|
|
|
|
Author |
Lei Kang; Lichao Zhang; Dazhi Jiang |
|
|
Title |
Learning Robust Self-Attention Features for Speech Emotion Recognition with Label-Adaptive Mixup |
Type |
Conference Article |
|
Year |
2023 |
Publication |
IEEE International Conference on Acoustics, Speech and Signal Processing |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Speech Emotion Recognition (SER) aims to recognize human emotions in natural verbal interaction scenarios with machines, which is considered a challenging problem due to the ambiguity of human emotions. Despite recent progress in SER, state-of-the-art models struggle to achieve satisfactory performance. We propose a self-attention based method with the combined use of label-adaptive mixup and center loss. By adapting label probabilities in mixup and fitting center loss to the mixup training scheme, our proposed method achieves superior performance to the state-of-the-art methods. |
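A minimal sketch of mixup with adapted label probabilities combined with a center loss, in the spirit of this abstract; treating the mixing ratio as the soft label and pulling embeddings toward label-weighted class centers are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch: mixup whose soft labels carry the mixing ratio, plus a
# center loss adapted to those soft labels.
import torch
import torch.nn.functional as F

def label_adaptive_mixup(x1, y1, x2, y2, num_classes: int, alpha: float = 1.0):
    """x1, x2: input batches; y1, y2: (batch,) integer emotion labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x = lam * x1 + (1.0 - lam) * x2
    # Soft labels carry the mixing ratio instead of a hard one-hot target.
    y = lam * F.one_hot(y1, num_classes).float() \
        + (1.0 - lam) * F.one_hot(y2, num_classes).float()
    return x, y

def center_loss(features: torch.Tensor, soft_labels: torch.Tensor,
                centers: torch.Tensor) -> torch.Tensor:
    """Pull each embedding toward the mix of class centers given by its soft
    label distribution; features: (batch, dim), centers: (K, dim)."""
    target = soft_labels @ centers                   # per-sample mixed center
    return ((features - target) ** 2).sum(dim=1).mean()
```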
|
|
Address |
Rhodes Island; Greece; June 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICASSP |
|
|
Notes |
LAMP |
Approved |
no |
|
|
Call Number |
Admin @ si @ KZJ2023 |
Serial |
3984 |
|
|
|
|
|
Author |
Alloy Das; Sanket Biswas; Ayan Banerjee; Josep Llados; Umapada Pal; Saumik Bhattacharya |
|
|
Title |
Harnessing the Power of Multi-Lingual Datasets for Pre-training: Towards Enhancing Text Spotting Performance |
Type |
Conference Article |
|
Year |
2024 |
Publication |
Winter Conference on Applications of Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
718-728 |
|
|
Keywords |
|
|
|
Abstract |
The ability to adapt to a wide range of domains is crucial for scene text spotting models when deployed to real-world conditions. However, existing state-of-the-art (SOTA) approaches usually incorporate scene text detection and recognition simply by pretraining on natural scene text datasets, which do not directly exploit the intermediate feature representations between multiple domains. Here, we investigate the problem of domain-adaptive scene text spotting, i.e., training a model on multi-domain source data such that it can directly adapt to target domains rather than being specialized for a specific domain or scenario. Further, we investigate a transformer baseline called Swin-TESTR to focus on solving scene-text spotting for both regular and arbitrary-shaped scene text, along with an exhaustive evaluation. The results clearly demonstrate the potential of intermediate representations to achieve significant performance on text spotting benchmarks across multiple domains (e.g., language, synth-to-real, and documents), both in terms of accuracy and efficiency. |
|
|
Address |
Waikoloa; Hawaii; USA; January 2024 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
WACV |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ DBB2024 |
Serial |
3986 |
|
|
|
|
|
Author |
Alex Gomez-Villa; Bartlomiej Twardowski; Kai Wang; Joost van de Weijer |
|
|
Title |
Plasticity-Optimized Complementary Networks for Unsupervised Continual Learning |
Type |
Conference Article |
|
Year |
2024 |
Publication |
Winter Conference on Applications of Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1690-1700 |
|
|
Keywords |
|
|
|
Abstract |
Continuous unsupervised representation learning (CURL) research has greatly benefited from improvements in self-supervised learning (SSL) techniques. As a result, existing CURL methods using SSL can learn high-quality representations without any labels, but with a notable performance drop when learning on a many-task data stream. We hypothesize that this is caused by the regularization losses that are imposed to prevent forgetting, leading to a suboptimal plasticity-stability trade-off: they either do not adapt fully to the incoming data (low plasticity), or incur significant forgetting when allowed to fully adapt to a new SSL pretext-task (low stability). In this work, we propose to train an expert network that is relieved of the duty of keeping the previous knowledge and can focus on performing optimally on the new tasks (optimizing plasticity). In the second phase, we combine this new knowledge with the previous network in an adaptation-retrospection phase to avoid forgetting and initialize a new expert with the knowledge of the old network. We perform several experiments showing that our proposed approach outperforms other CURL exemplar-free methods in few- and many-task split settings. Furthermore, we show how to adapt our approach to semi-supervised continual learning (Semi-SCL) and show that we surpass the accuracy of other exemplar-free Semi-SCL methods and reach the results of some others that use exemplars. |
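The two-phase scheme summarized above (a plasticity-focused expert, then an adaptation-retrospection phase) can be sketched as follows; the function names, the MSE distance, and the equal loss weighting are assumptions, not the authors' implementation:

```python
# Hedged sketch: phase 1 trains an unconstrained expert on the new task
# (maximizing plasticity); phase 2 distills both the expert (adaptation) and
# a frozen snapshot of the previous network (retrospection) into the main
# network.
import copy
import torch
import torch.nn.functional as F

def train_task(main_net, loader, ssl_loss, make_optimizer, epochs: int = 1):
    """ssl_loss(net, batch) -> scalar SSL pretext loss (hypothetical helper);
    make_optimizer(params) -> a torch optimizer."""
    # Phase 1: fresh expert, free to adapt fully to the new task.
    expert = copy.deepcopy(main_net)
    opt = make_optimizer(expert.parameters())
    for _ in range(epochs):
        for batch in loader:
            opt.zero_grad()
            ssl_loss(expert, batch).backward()
            opt.step()

    # Phase 2: adaptation-retrospection into the main network.
    old_net = copy.deepcopy(main_net)       # frozen previous knowledge
    opt = make_optimizer(main_net.parameters())
    for _ in range(epochs):
        for batch in loader:
            opt.zero_grad()
            z = main_net(batch)
            adapt = F.mse_loss(z, expert(batch).detach())   # learn the new
            retro = F.mse_loss(z, old_net(batch).detach())  # keep the old
            (adapt + retro).backward()
            opt.step()
    return main_net
```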
|
|
Address |
Waikoloa; Hawaii; USA; January 2024 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
WACV |
|
|
Notes |
LAMP |
Approved |
no |
|
|
Call Number |
Admin @ si @ GTW2024 |
Serial |
3989 |
|