Author: Francesc Net; Marc Folia; Pep Casals; Lluis Gomez
Title: Transductive Learning for Near-Duplicate Image Detection in Scanned Photo Collections
Type: Conference Article
Year: 2023
Publication: 17th International Conference on Document Analysis and Recognition
Volume: 14191
Pages: 3-17
Keywords: Image deduplication; Near-duplicate image detection; Transductive Learning; Photographic Archives; Deep Learning
Abstract: This paper presents a comparative study of near-duplicate image detection techniques in a real-world use case, where a document management company is commissioned to manually annotate a collection of scanned photographs. Detecting duplicate and near-duplicate photographs can reduce the time archivists spend on manual annotation. This real use case differs from laboratory settings in that the deployment dataset is available in advance, allowing the use of transductive learning. We propose a transductive learning approach that leverages state-of-the-art deep learning architectures such as convolutional neural networks (CNNs) and Vision Transformers (ViTs). Our approach involves pre-training a deep neural network on a large dataset and then fine-tuning it on the unlabeled target collection with self-supervised learning. The results show that the proposed approach outperforms the baseline methods on the task of near-duplicate image detection on the UKBench dataset and an in-house private dataset.
Address: San Jose, CA, USA; August 2023
Abbreviated Series Title: LNCS
Conference: ICDAR
Notes: DAG
Approved: no
Call Number: Admin @ si @ NFC2023
Serial: 3859

---
Author: George Tom; Minesh Mathew; Sergi Garcia Bordils; Dimosthenis Karatzas; CV Jawahar
Title: Reading Between the Lanes: Text VideoQA on the Road
Type: Conference Article
Year: 2023
Publication: 17th International Conference on Document Analysis and Recognition
Volume: 14192
Pages: 137-154
Keywords: VideoQA; scene text; driving videos
Abstract: Text and signs around roads provide crucial information for drivers, vital for safe navigation and situational awareness. Scene text recognition in motion is a challenging problem: textual cues typically appear for only a short time span, and early detection at a distance is necessary. Systems that exploit such information to assist the driver should not only extract and incorporate visual and textual cues from the video stream but also reason over time. To address this issue, we introduce RoadTextVQA, a new dataset for the task of video question answering (VideoQA) in the context of driver assistance. RoadTextVQA consists of 3,222 driving videos collected from multiple countries, annotated with 10,500 questions, all based on text or road signs present in the driving videos. We assess the performance of state-of-the-art video question answering models on our RoadTextVQA dataset, highlighting the significant potential for improvement in this domain and the usefulness of the dataset in advancing research on in-vehicle support systems and text-aware multimodal question answering. The dataset is available at http://cvit.iiit.ac.in/research/projects/cvit-projects/roadtextvqa.
Address: San Jose, CA, USA; August 2023
Abbreviated Series Title: LNCS
Conference: ICDAR
Notes: DAG
Approved: no
Call Number: Admin @ si @ TMG2023
Serial: 3906

---
Author: Sergi Garcia Bordils; Dimosthenis Karatzas; Marçal Rusiñol
Title: Accelerating Transformer-Based Scene Text Detection and Recognition via Token Pruning
Type: Conference Article
Year: 2023
Publication: 17th International Conference on Document Analysis and Recognition
Volume: 14192
Pages: 106-121
Keywords: Scene Text Detection; Scene Text Recognition; Transformer Acceleration
Abstract: Scene text detection and recognition is a crucial task in computer vision with numerous real-world applications. Transformer-based approaches are behind all current state-of-the-art models and have achieved excellent performance. However, the computational requirements of the transformer architecture make training these methods slow and resource-heavy. In this paper, we introduce a new token pruning strategy that significantly decreases training and inference times without sacrificing performance, striking a balance between accuracy and speed. We have applied this pruning technique to our own end-to-end transformer-based scene text understanding architecture. Our method uses a separate detection branch to guide the pruning of uninformative image features, which significantly reduces the number of tokens at the input of the transformer. Experimental results show that our network obtains competitive results on multiple public benchmarks while running at significantly higher speeds.
Address: San Jose, CA, USA; August 2023
Abbreviated Series Title: LNCS
Conference: ICDAR
Notes: DAG
Approved: no
Call Number: Admin @ si @ GKR2023a
Serial: 3907

---
Author: Pau Torras; Mohamed Ali Souibgui; Sanket Biswas; Alicia Fornes
Title: Segmentation-Free Alignment of Arbitrary Symbol Transcripts to Images
Type: Conference Article
Year: 2023
Publication: Document Analysis and Recognition – ICDAR 2023 Workshops
Volume: 14193
Pages: 83-93
Keywords: Historical Manuscripts; Symbol Alignment
Abstract: Developing arbitrary symbol recognition systems is a challenging endeavour. Even when using content-agnostic architectures such as few-shot models, performance can be substantially improved by providing a number of well-annotated examples during training. In some contexts, transcripts of the symbols are available without any position information associated with them, which enables the use of line-level recognition architectures. One way of providing this position information to detection-based architectures is to find systems that can align the input symbols with the transcription. In this paper we discuss symbol alignment techniques that are suitable for low-data scenarios and provide insight into their perceived strengths and weaknesses. In particular, we study the use of Connectionist Temporal Classification models and attention-based sequence-to-sequence models, and we compare them with the results obtained with a few-shot recognition system.
Abbreviated Series Title: LNCS
Conference: ICDAR
Notes: DAG
Approved: no
Call Number: Admin @ si @ TSS2023
Serial: 3850

---
Author: Adarsh Tiwari; Sanket Biswas; Josep Llados
Title: Can Pre-trained Language Models Help in Understanding Handwritten Symbols?
Type: Conference Article
Year: 2023
Publication: Document Analysis and Recognition – ICDAR 2023 Workshops
Volume: 14193
Pages: 199-211
Abstract: The emergence of transformer models like BERT, GPT-2, GPT-3, RoBERTa, and T5 for natural language understanding tasks has opened the floodgates to solving a wide array of machine learning tasks in other modalities such as images, audio, music, and sketches. These language models are domain-agnostic and, as a result, can be applied to 1-D sequences of any kind. However, the key challenge lies in bridging the modality gap so that they can generate strong features beneficial for out-of-domain tasks. This work focuses on leveraging the power of such pre-trained language models and discusses the challenges in predicting handwritten symbols and alphabets.
Address: San Jose, CA, USA; August 2023
Conference: ICDAR
Notes: DAG
Approved: no
Call Number: Admin @ si @ TBL2023
Serial: 3908

---
Author: Mickael Coustaty; Alicia Fornes
Title: Document Analysis and Recognition – ICDAR 2023 Workshops
Type: Book Whole
Year: 2023
Publication: Document Analysis and Recognition – ICDAR 2023 Workshops
Volume: 14194
Issue: 2
Address: San Jose, CA, USA; August 2023
Abbreviated Series Title: LNCS
Conference: ICDAR
Notes: DAG
Approved: no
Call Number: Admin @ si @ CoF2023
Serial: 3852