Records | |||||
---|---|---|---|---|---|
Author | Adria Molina; Pau Riba; Lluis Gomez; Oriol Ramos Terrades; Josep Llados | ||||
Title | Date Estimation in the Wild of Scanned Historical Photos: An Image Retrieval Approach | Type | Conference Article | ||
Year | 2021 | Publication | 16th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | 12822 | Issue | Pages | 306-320 | |
Keywords | |||||
Abstract | This paper presents a novel method for date estimation of historical photographs from archival sources. The main contribution is to formulate date estimation as a retrieval task in which, given a query, the retrieved images are ranked by estimated date similarity: the closer their embedded representations, the closer their dates. In contrast to traditional approaches that train a neural network as a classifier or a regressor, we propose a learning objective based on the nDCG ranking metric. We experimentally evaluate the performance of the method on two different tasks, date estimation and date-sensitive image retrieval, using the public DEW database, outperforming the baseline methods. | ||||
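The nDCG-based learning objective mentioned in this abstract builds on the standard normalized discounted cumulative gain. The following is a minimal, library-free sketch of that metric in plain Python; it is an illustration of the metric itself, not the authors' implementation, and the function names are hypothetical:

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: each relevance value is discounted
    # by log2 of its 1-based rank position plus one.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    # Normalize by the DCG of the ideal (descending) ordering,
    # so a perfectly ranked list scores 1.0.
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```

A retrieval list ranked exactly by date similarity yields an nDCG of 1.0; any mis-ordering lowers the score, which is what makes the metric usable as a ranking objective.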
Address | Lausanne; Switzerland; September 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.121; 600.140; 110.312 | Approved | no | ||
Call Number | Admin @ si @ MRG2021b | Serial | 3571 | ||
Permanent link to this record | |||||
Author | Pau Riba; Adria Molina; Lluis Gomez; Oriol Ramos Terrades; Josep Llados | ||||
Title | Learning to Rank Words: Optimizing Ranking Metrics for Word Spotting | Type | Conference Article | ||
Year | 2021 | Publication | 16th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | 12822 | Issue | Pages | 381–395 | |
Keywords | |||||
Abstract | In this paper, we explore and evaluate the use of ranking-based objective functions for simultaneously learning a word string encoder and a word image encoder. We consider retrieval frameworks in which the user expects a retrieval list ranked according to a defined relevance score. In the context of a word spotting problem, the relevance score is defined by the string edit distance from the query string. We experimentally demonstrate the competitive performance of the proposed model on query-by-string word spotting for both handwritten and real scene word images. We also provide results for query-by-example word spotting, although it is not the main focus of this work. | ||||
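The edit-distance-based relevance score described in this abstract can be illustrated with a plain Levenshtein distance. A minimal sketch follows; the normalization of the distance into a [0, 1] relevance is a hypothetical choice for illustration, not necessarily the paper's:

```python
def edit_distance(a, b):
    # Classic Levenshtein dynamic program, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def relevance(query, word):
    # Hypothetical normalization: identical strings score 1.0,
    # maximally different strings score 0.0.
    d = edit_distance(query, word)
    return 1.0 - d / max(len(query), len(word), 1)
```

Under such a score, a retrieved word image whose transcription is one character away from the query string ranks above one that is three characters away.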
Address | Lausanne; Switzerland; September 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.121; 600.140; 110.312 | Approved | no | ||
Call Number | Admin @ si @ RMG2021 | Serial | 3572 | ||
Permanent link to this record | |||||
Author | Sanket Biswas; Pau Riba; Josep Llados; Umapada Pal | ||||
Title | DocSynth: A Layout Guided Approach for Controllable Document Image Synthesis | Type | Conference Article | ||
Year | 2021 | Publication | 16th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | 12823 | Issue | Pages | 555–568 | |
Keywords | |||||
Abstract | Despite significant progress in current state-of-the-art image generation models, synthesizing document images containing multiple complex object layouts remains a challenging task. This paper presents a novel approach, called DocSynth, to automatically synthesize document images based on a given layout. Given a spatial layout (bounding boxes with object categories) as a reference from the user, the proposed DocSynth model learns to generate a set of realistic document images consistent with the defined layout. This framework can also serve as a baseline for creating synthetic document image datasets that augment real data when training document layout analysis models. Different sets of learning objectives have also been used to improve model performance. Quantitatively, we compare the generated results of our model with real data using standard evaluation metrics. The results highlight that our model can successfully generate realistic and diverse document images with multiple objects, and we present a comprehensive qualitative analysis of the different scopes of synthetic image generation tasks. To our knowledge, this is the first work of its kind. | ||||
Address | Lausanne; Switzerland; September 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.121; 600.140; 110.312 | Approved | no | ||
Call Number | Admin @ si @ BRL2021a | Serial | 3573 | ||
Permanent link to this record | |||||
Author | Albert Suso; Pau Riba; Oriol Ramos Terrades; Josep Llados | ||||
Title | A Self-supervised Inverse Graphics Approach for Sketch Parametrization | Type | Conference Article | ||
Year | 2021 | Publication | 16th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | 12916 | Issue | Pages | 28-42 | |
Keywords | |||||
Abstract | The study of neural generative models of handwritten text and human sketches is a hot topic in the computer vision field. The landmark SketchRNN provided a breakthrough by sequentially generating sketches as a sequence of waypoints, and more recent articles have managed to generate fully vector sketches by coding the strokes as Bézier curves. However, all previous attempts with this approach require a ground truth consisting of the sequence of points that make up each stroke, which severely limits the datasets on which the model can be trained. In this work, we present a self-supervised, end-to-end inverse graphics approach that learns to embed each image into its best fit of Bézier curves. The self-supervised nature of the training process allows us to train the model on a wider range of datasets, and also to perform better after-training predictions by applying an overfitting process on the input binary image. We report qualitative and quantitative evaluations on the MNIST and the Quick, Draw! datasets. | ||||
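The Bézier-curve stroke encoding this abstract refers to reduces, at rendering time, to evaluating a curve at a parameter t, which de Casteljau's algorithm does by repeated linear interpolation. An illustrative sketch (not the paper's code; the function name is hypothetical):

```python
def bezier_point(control_points, t):
    # De Casteljau's algorithm: repeatedly interpolate between
    # consecutive control points until a single point remains.
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]
```

Sampling t over [0, 1] traces the stroke; fitting, as in the paper, is the inverse problem of finding control points whose sampled curve best matches the input image.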
Address | Lausanne; Switzerland; September 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.121 | Approved | no | ||
Call Number | Admin @ si @ SRR2021 | Serial | 3675 | ||
Permanent link to this record | |||||
Author | Sanket Biswas; Pau Riba; Josep Llados; Umapada Pal | ||||
Title | Graph-Based Deep Generative Modelling for Document Layout Generation | Type | Conference Article | ||
Year | 2021 | Publication | 16th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | 12917 | Issue | Pages | 525-537 | |
Keywords | |||||
Abstract | One of the major prerequisites for any deep learning approach is the availability of large-scale training data. When dealing with scanned document images in real-world scenarios, the principal information about their content is stored in the layout itself. In this work, we propose an automated deep generative model using Graph Neural Networks (GNNs) to generate synthetic data with highly variable and plausible document layouts that can be used to train document interpretation systems, especially in digital mailroom applications. It is also the first graph-based approach for the document layout generation task evaluated on administrative document images, in this case invoices. | ||||
Address | Lausanne; Switzerland; September 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.121; 600.140; 110.312 | Approved | no | ||
Call Number | Admin @ si @ BRL2021 | Serial | 3676 | ||
Permanent link to this record | |||||
Author | Josep Llados | ||||
Title | The 5G of Document Intelligence | Type | Conference Article | ||
Year | 2021 | Publication | 3rd Workshop on Future of Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Lausanne; Switzerland; September 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | FDAR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3677 | ||
Permanent link to this record | |||||
Author | Josep Llados; Daniel Lopresti; Seiichi Uchida (eds) | ||||
Title | 16th International Conference, 2021, Proceedings, Part III | Type | Book Whole | ||
Year | 2021 | Publication | Document Analysis and Recognition – ICDAR 2021 | Abbreviated Journal | |
Volume | 12823 | Issue | Pages | ||
Keywords | |||||
Abstract | This four-volume set of LNCS 12821, LNCS 12822, LNCS 12823 and LNCS 12824 constitutes the refereed proceedings of the 16th International Conference on Document Analysis and Recognition, ICDAR 2021, held in Lausanne, Switzerland, in September 2021. The 182 full papers were carefully reviewed and selected from 340 submissions, and are presented with 13 competition reports. The papers are organized into the following topical sections: document analysis for literature search, document summarization and translation, multimedia document analysis, mobile text recognition, document analysis for social good, indexing and retrieval of documents, physical and logical layout analysis, recognition of tables and formulas, and natural language processing (NLP) for document understanding. | ||||
Address | Lausanne, Switzerland, September 5-10, 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Cham | Place of Publication | Editor | Josep Llados; Daniel Lopresti; Seiichi Uchida | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-030-86333-3 | Medium | ||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3727 | ||
Permanent link to this record | |||||
Author | Josep Llados; Daniel Lopresti; Seiichi Uchida (eds) | ||||
Title | 16th International Conference, 2021, Proceedings, Part IV | Type | Book Whole | ||
Year | 2021 | Publication | Document Analysis and Recognition – ICDAR 2021 | Abbreviated Journal | |
Volume | 12824 | Issue | Pages | ||
Keywords | |||||
Abstract | This four-volume set of LNCS 12821, LNCS 12822, LNCS 12823 and LNCS 12824 constitutes the refereed proceedings of the 16th International Conference on Document Analysis and Recognition, ICDAR 2021, held in Lausanne, Switzerland, in September 2021. The 182 full papers were carefully reviewed and selected from 340 submissions, and are presented with 13 competition reports. The papers are organized into the following topical sections: document analysis for literature search, document summarization and translation, multimedia document analysis, mobile text recognition, document analysis for social good, indexing and retrieval of documents, physical and logical layout analysis, recognition of tables and formulas, and natural language processing (NLP) for document understanding. | ||||
Address | Lausanne, Switzerland, September 5-10, 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Cham | Place of Publication | Editor | Josep Llados; Daniel Lopresti; Seiichi Uchida | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-030-86336-4 | Medium | ||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3728 | ||
Permanent link to this record | |||||
Author | Josep Llados; Daniel Lopresti; Seiichi Uchida (eds) | ||||
Title | 16th International Conference, 2021, Proceedings, Part I | Type | Book Whole | ||
Year | 2021 | Publication | Document Analysis and Recognition – ICDAR 2021 | Abbreviated Journal | |
Volume | 12821 | Issue | Pages | ||
Keywords | |||||
Abstract | This four-volume set of LNCS 12821, LNCS 12822, LNCS 12823 and LNCS 12824 constitutes the refereed proceedings of the 16th International Conference on Document Analysis and Recognition, ICDAR 2021, held in Lausanne, Switzerland, in September 2021. The 182 full papers were carefully reviewed and selected from 340 submissions, and are presented with 13 competition reports. The papers are organized into the following topical sections: historical document analysis, document analysis systems, handwriting recognition, scene text detection and recognition, document image processing, natural language processing (NLP) for document understanding, and graphics, diagram and math recognition. | ||||
Address | Lausanne, Switzerland, September 5-10, 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Cham | Place of Publication | Editor | Josep Llados; Daniel Lopresti; Seiichi Uchida | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-030-86548-1 | Medium | ||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3725 | ||
Permanent link to this record | |||||
Author | Josep Llados; Daniel Lopresti; Seiichi Uchida (eds) | ||||
Title | 16th International Conference, 2021, Proceedings, Part II | Type | Book Whole | ||
Year | 2021 | Publication | Document Analysis and Recognition – ICDAR 2021 | Abbreviated Journal | |
Volume | 12822 | Issue | Pages | ||
Keywords | |||||
Abstract | This four-volume set of LNCS 12821, LNCS 12822, LNCS 12823 and LNCS 12824 constitutes the refereed proceedings of the 16th International Conference on Document Analysis and Recognition, ICDAR 2021, held in Lausanne, Switzerland, in September 2021. The 182 full papers were carefully reviewed and selected from 340 submissions, and are presented with 13 competition reports. The papers are organized into the following topical sections: document analysis for literature search, document summarization and translation, multimedia document analysis, mobile text recognition, document analysis for social good, indexing and retrieval of documents, physical and logical layout analysis, recognition of tables and formulas, and natural language processing (NLP) for document understanding. | ||||
Address | Lausanne, Switzerland, September 5-10, 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Cham | Place of Publication | Editor | Josep Llados; Daniel Lopresti; Seiichi Uchida | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-030-86330-2 | Medium | ||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3726 | ||
Permanent link to this record | |||||
Author | J.M. Sanchez; X. Binefa; J.R. Kender | ||||
Title | Multiple Feature Temporal Models for Object Detection in Video. | Type | Miscellaneous | ||
Year | 2002 | Publication | Proceeding of the International Conference on Multimedia and Expo ICME 2002 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Lausanne | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | Admin @ si @ SBK2002b | Serial | 299 | ||
Permanent link to this record | |||||
Author | German Ros; Laura Sellart; Joanna Materzynska; David Vazquez; Antonio Lopez | ||||
Title | The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes | Type | Conference Article | ||
Year | 2016 | Publication | 29th IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 3234-3243 | ||
Keywords | Domain Adaptation; Autonomous Driving; Virtual Data; Semantic Segmentation | ||||
Abstract | Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. The advent of deep convolutional neural networks (DCNNs) makes it feasible to obtain reliable classifiers for this visual task. However, DCNNs require learning many parameters from raw images; thus, a sufficient amount of diverse images with class annotations is needed. Obtaining such annotations is cumbersome human labour, especially challenging for semantic segmentation, since pixel-level annotations are required. In this paper, we propose to use a virtual world for automatically generating realistic synthetic images with pixel-level annotations. We then address the question of how useful such data can be for the task of semantic segmentation, in particular when using a DCNN paradigm. To answer this question we have generated a diverse synthetic collection of urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations, and conduct experiments in a DCNN setting that show how the inclusion of SYNTHIA in the training stage significantly improves the performance of the semantic segmentation task. | ||||
Address | Las Vegas; USA; June 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | ADAS; 600.085; 600.082; 600.076 | Approved | no | ||
Call Number | ADAS @ adas @ RSM2016 | Serial | 2739 | ||
Permanent link to this record | |||||
Author | Cristhian A. Aguilera-Carrasco; F. Aguilera; Angel Sappa; C. Aguilera; Ricardo Toledo | ||||
Title | Learning cross-spectral similarity measures with deep convolutional neural networks | Type | Conference Article | ||
Year | 2016 | Publication | 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene than can be obtained with images from a single spectral band. In this work we tackle the cross-spectral image similarity problem using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result on two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains. | ||||
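As an illustration of the patch-comparison step described here: once two cross-spectral patches are embedded into descriptor vectors (by whatever network), comparing them reduces to a similarity measure such as cosine similarity. A minimal sketch with the descriptors assumed precomputed (a hypothetical illustration, not the paper's pipeline):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two descriptor vectors:
    # 1.0 for identical directions, 0.0 for orthogonal ones.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```

A learned cross-spectral measure, as in the paper, replaces or complements such a fixed measure so that corresponding visible and near-infrared patches score high despite their different appearance.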
Address | Las Vegas; USA; June 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | ADAS; 600.086; 600.076 | Approved | no | ||
Call Number | Admin @ si @AAS2016 | Serial | 2809 | ||
Permanent link to this record | |||||
Author | Sergio Escalera; Mercedes Torres-Torres; Brais Martinez; Xavier Baro; Hugo Jair Escalante; Isabelle Guyon; Georgios Tzimiropoulos; Ciprian Corneanu; Marc Oliu Simón; Mohammad Ali Bagheri; Michel Valstar | ||||
Title | ChaLearn Looking at People and Faces of the World: Face Analysis Workshop and Challenge 2016 | Type | Conference Article | ||
Year | 2016 | Publication | 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | We present the 2016 ChaLearn Looking at People and Faces of the World Challenge and Workshop, which ran three competitions on the common theme of face analysis from still images. The first one, Looking at People, addressed age estimation, while the second and third competitions, Faces of the World, addressed accessory classification and smile and gender classification, respectively. We present the two crowd-sourcing methodologies used to collect manual annotations. A custom-built application was used to collect and label data about the apparent age of people (as opposed to the real age). For the Faces of the World data, the citizen-science Zooniverse platform was used. This paper summarizes the three challenges and the data used, as well as the results achieved by the participants of the competitions. Details of the ChaLearn LAP FotW competitions can be found at http://gesture.chalearn.org. | ||||
Address | Las Vegas; USA; June 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | HuPBA;MV; | Approved | no | ||
Call Number | ETM2016 | Serial | 2849 | ||
Permanent link to this record | |||||
Author | Jun Wan; Yibing Zhao; Shuai Zhou; Isabelle Guyon; Sergio Escalera | ||||
Title | ChaLearn Looking at People RGB-D Isolated and Continuous Datasets for Gesture Recognition | Type | Conference Article | ||
Year | 2016 | Publication | 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In this paper, we present two large video multi-modal datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD), which has a total of more than 50000 gestures for the “one-shot-learning” competition. To increase the potential of the old dataset, we designed new well-curated datasets composed of 249 gesture labels, including 47933 gestures with manually labeled begin and end frames in the sequences. Using these datasets we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for “user independent” gesture recognition. The first challenge is designed for gesture spotting and recognition in continuous sequences of gestures, while the second one is designed for gesture classification from segmented data. A baseline method based on the bag of visual words model is also presented. | ||||
Address | Las Vegas; USA; July 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | HuPBA;MILAB; | Approved | no | ||
Call Number | Admin @ si @ WZZ2016 | Serial | 2771 | ||
Permanent link to this record |