Author: Emanuele Vivoli; Ali Furkan Biten; Andres Mafla; Dimosthenis Karatzas; Lluis Gomez
Title: MUST-VQA: MUltilingual Scene-text VQA
Type: Conference Article
Year: 2022
Publication: Proceedings of the European Conference on Computer Vision Workshops
Abbreviated Series Title: LNCS
Volume: 13804
Pages: 345–358
Keywords: Visual question answering; Scene text; Translation robustness; Multilingual models; Zero-shot transfer; Power of language models
Abstract: In this paper, we present a framework for Multilingual Scene Text Visual Question Answering that deals with new languages in a zero-shot fashion. Specifically, we consider the task of Scene Text Visual Question Answering (STVQA), in which the question can be asked in different languages that are not necessarily aligned with the scene text language. We first introduce a natural step towards a more generalized version of STVQA: MUST-VQA. Accordingly, we discuss two evaluation scenarios in the constrained setting, namely IID and zero-shot, and we demonstrate that the models can perform on par in the zero-shot setting. We further provide extensive experimentation and show the effectiveness of adapting multilingual language models to STVQA tasks.
Address: Tel Aviv, Israel; October 2022
Conference: ECCVW
Notes: DAG; 302.105; 600.155; 611.002
Approved: no
Call Number: Admin @ si @ VBM2022
Serial: 3770
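As a rough illustration of the two evaluation scenarios named in the abstract above (IID versus zero-shot question languages), the following Python sketch scores per-question predictions split by whether the question language was seen during training. It is not the authors' code; the record fields (question_lang, prediction, answer) and the exact-match accuracy metric are assumptions made for the example.

    # Illustrative sketch only: split STVQA results into IID (seen question
    # languages) and zero-shot (unseen question languages) and score each split.
    def accuracy(items):
        """Fraction of items whose prediction matches the ground-truth answer."""
        return sum(it["prediction"] == it["answer"] for it in items) / len(items) if items else 0.0

    def evaluate_by_scenario(results, seen_languages):
        """results: per-question dicts with hypothetical keys question_lang, prediction, answer."""
        iid = [r for r in results if r["question_lang"] in seen_languages]
        zero_shot = [r for r in results if r["question_lang"] not in seen_languages]
        return {"iid": accuracy(iid), "zero_shot": accuracy(zero_shot)}

    # Hypothetical usage:
    results = [
        {"question_lang": "en", "prediction": "stop", "answer": "stop"},
        {"question_lang": "ca", "prediction": "exit", "answer": "sortida"},
    ]
    print(evaluate_by_scenario(results, seen_languages={"en", "es"}))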
 

 
Author: Sergi Garcia Bordils; Andres Mafla; Ali Furkan Biten; Oren Nuriel; Aviad Aberdam; Shai Mazor; Ron Litman; Dimosthenis Karatzas
Title: Out-of-Vocabulary Challenge Report
Type: Conference Article
Year: 2022
Publication: Proceedings of the European Conference on Computer Vision Workshops
Abbreviated Series Title: LNCS
Volume: 13804
Pages: 359–375
Abstract: This paper presents the final results of the Out-Of-Vocabulary 2022 (OOV) challenge. The OOV contest introduces an important aspect that is not commonly studied by Optical Character Recognition (OCR) models, namely the recognition of scene text instances unseen at training time. The competition compiles a collection of public scene text datasets comprising 326,385 images with 4,864,405 scene text instances, thus covering a wide range of data distributions. A new and independent validation and test set is formed with scene text instances that are out of vocabulary at training time. The competition was structured in two tasks, end-to-end and cropped scene text recognition, respectively. A thorough analysis of results from baselines and different participants is presented. Interestingly, current state-of-the-art models show a significant performance gap under the newly studied setting. We conclude that the OOV dataset proposed in this challenge will be an essential area to explore in order to develop scene text models that achieve more robust and generalized predictions.
Address: Tel Aviv, Israel; October 2022
Conference: ECCVW
Notes: DAG; 600.155; 302.105; 611.002
Approved: no
Call Number: Admin @ si @ GMB2022
Serial: 3771
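A minimal sketch of the out-of-vocabulary split described in the abstract above: evaluation instances are kept only if their transcription never appears in the training vocabulary. This is an illustration, not the challenge organizers' code; the instance format (dicts with a text field) and the case-folding choice are assumptions.

    # Illustrative sketch: build an out-of-vocabulary evaluation split by removing
    # every instance whose (case-folded) transcription was seen at training time.
    def training_vocabulary(train_instances):
        return {inst["text"].lower() for inst in train_instances}

    def out_of_vocabulary_split(eval_instances, vocab):
        return [inst for inst in eval_instances if inst["text"].lower() not in vocab]

    # Hypothetical instances: each one is a scene text word with its transcription.
    train = [{"text": "EXIT"}, {"text": "STOP"}]
    evaluation = [{"text": "stop"}, {"text": "Pharmacy"}]
    print(out_of_vocabulary_split(evaluation, training_vocabulary(train)))  # [{'text': 'Pharmacy'}]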
 

 
Author: Ramon Baldrich; Ricardo Toledo; Ernest Valveny; Maria Vanrell
Title: Perceptual Colour Image Segmentation
Type: Miscellaneous
Year: 2002
Publication: Proceedings of the Second IASTED International Conference on Visualization, Imaging and Image Processing (VIIP 2002), 145–150
Notes: DAG; CIC; ADAS
Approved: no
Call Number: CAT @ cat @ BTV2002
Serial: 290
 

 
Author: Ernest Valveny; Ricardo Toledo; Ramon Baldrich; Enric Marti
Title: Combining recognition-based and segmentation-based approaches for graphic symbol recognition using deformable template matching
Type: Conference Article
Year: 2002
Publication: Proceedings of the Second IASTED International Conference on Visualization, Imaging and Image Processing (VIIP 2002)
Pages: 502–507
Notes: DAG; RV; CAT; IAM; CIC; ADAS
Approved: no
Call Number: IAM @ iam @ VTB2002
Serial: 1660
 

 
Author: Josep Llados; Enric Marti; Jaime Lopez-Krahe
Title: A Hough-based method for hatched pattern detection in maps and diagrams
Type: Conference Article
Year: 1999
Publication: Proceedings of the Fifth International Conference on Document Analysis and Recognition (ICDAR '99)
Pages: 479–482
Abstract: A hatched area is characterized by a set of parallel straight lines placed at regular intervals. In this paper, a Hough-based scheme is introduced to recognize hatched areas in technical documents from attributed graph structures that represent the document once it has been vectorized. Defining a Hough-based transform on a graph instead of the raster image allows us both to drastically reduce the processing time and to obtain more reliable results, because straight lines have already been detected in the vectorization step. A second advantage of the proposed method is that no a priori assumptions need to be made about the slope and frequency of hatching patterns; they are computed at run time for each hatched area.
Notes: DAG; IAM
Approved: no
Call Number: IAM @ iam @ LIM1999b
Serial: 1580
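To make the idea in the abstract above concrete, here is a small Python sketch that works on already-vectorized line segments rather than raster pixels: segments are bucketed by orientation (the Hough-like step), and a bucket is reported as a candidate hatched area when its lines are roughly evenly spaced. The segment format, the bin width and the spacing tolerance are assumptions made for the example, not values from the paper.

    # Illustrative sketch: group vectorized segments into candidate hatched areas.
    # Segments are (x1, y1, x2, y2) tuples; orientation and signed offset act as
    # the Hough parameters of each segment's supporting line.
    import math
    from collections import defaultdict

    def line_params(seg):
        """Return (orientation in [0, pi), signed distance of the line to the origin)."""
        x1, y1, x2, y2 = seg
        angle = math.atan2(y2 - y1, x2 - x1) % math.pi
        nx, ny = -math.sin(angle), math.cos(angle)   # unit normal of the line
        return angle, nx * x1 + ny * y1

    def hatched_groups(segments, angle_bin=math.radians(2), min_lines=3):
        """Bucket segments by orientation, then keep buckets with regular spacing."""
        buckets = defaultdict(list)
        for seg in segments:
            angle, offset = line_params(seg)
            buckets[round(angle / angle_bin)].append(offset)
        groups = []
        for offsets in buckets.values():
            if len(offsets) < min_lines:
                continue
            offsets.sort()
            gaps = [b - a for a, b in zip(offsets, offsets[1:])]
            mean_gap = sum(gaps) / len(gaps)
            if all(abs(g - mean_gap) < 0.25 * mean_gap for g in gaps):
                groups.append(offsets)               # regular spacing: hatching candidate
        return groups

    # Hypothetical usage: three parallel horizontal segments, evenly spaced.
    print(hatched_groups([(0, 0, 10, 0), (0, 5, 10, 5), (0, 10, 10, 10)]))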
 

 
Author: Maria Vanrell; Felipe Lumbreras; A. Pujol; Ramon Baldrich; Josep Llados; Juan J. Villanueva
Title: Colour Normalisation Based on Background Information
Type: Miscellaneous
Year: 2001
Publication: Proceedings of ICIP 2001, IEEE International Conference on Image Processing
Abbreviated Journal: ICIP 2001
Issue: 1
Pages: 874–877
Address: Greece
Notes: ADAS; DAG; CIC
Approved: no
Call Number: ADAS @ adas @ VLP2001
Serial: 167
 

 
Author: Ernest Valveny; Enric Marti
Title: Learning of structural descriptions of graphic symbols using deformable template matching
Type: Conference Article
Year: 2001
Publication: Proceedings of the Sixth International Conference on Document Analysis and Recognition
Pages: 455–459
Abstract: Accurate symbol recognition in graphic documents needs an accurate representation of the symbols to be recognized. If structural approaches are used for recognition, symbols have to be described in terms of their shape, using structural relationships among extracted features. Unlike in statistical pattern recognition, in structural methods symbols are usually defined manually from expert knowledge rather than inferred automatically from sample images. In this work we describe an approach to learning, from examples, a representative structural description of a symbol, thus providing better information about shape variability. The description of a symbol is based on a probabilistic model: it consists of a set of lines described by the mean and the variance of the line parameters, which respectively provide information about the model of the symbol and about its shape variability. The representation of each image in the sample set as a set of lines is obtained using deformable template matching.
Notes: DAG; IAM
Approved: no
Call Number: IAM @ iam @ VMA2001
Serial: 1654
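A minimal sketch of the probabilistic symbol model described in the abstract above: each model line is summarized by the mean and the variance of its parameters over a set of sample images. It assumes the samples have already been converted to ordered lists of line parameters (the paper obtains these with deformable template matching); the (rho, theta) parameterization is an assumption for the example.

    # Illustrative sketch: per-line mean and variance of line parameters, estimated
    # from samples that list their lines in the same order.
    from statistics import mean, pvariance

    def learn_symbol_model(samples):
        """samples: list of samples, each a list of (rho, theta) line parameters."""
        n_lines = len(samples[0])
        model = []
        for i in range(n_lines):
            params = list(zip(*[sample[i] for sample in samples]))  # values per parameter
            model.append({
                "mean": tuple(mean(values) for values in params),
                "variance": tuple(pvariance(values) for values in params),
            })
        return model

    # Two hypothetical samples of a symbol drawn with two lines each.
    samples = [
        [(10.0, 0.00), (20.0, 1.57)],
        [(11.0, 0.05), (19.0, 1.52)],
    ]
    print(learn_symbol_model(samples))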
 

 
Author: Ernest Valveny; Enric Marti
Title: Hand-drawn symbol recognition in graphic documents using deformable template matching and a Bayesian framework
Type: Conference Article
Year: 2000
Publication: Proceedings of the 15th International Conference on Pattern Recognition
Volume: 2
Pages: 239–242
Abstract: Hand-drawn symbols can take many different and distorted shapes with respect to their ideal representation, so very flexible methods are needed to handle unconstrained drawings. We propose here to extend our previous work on hand-drawn symbol recognition based on a Bayesian framework and deformable template matching. This approach is flexible enough to fit distorted shapes in the drawing while keeping fidelity to the ideal shape of the symbol. In this work, we define the similarity measure between an image and a symbol based on the distance from every pixel in the image to the lines in the symbol. Matching is carried out using an implementation of the EM algorithm. Thus, we can improve recognition rates and computation time with respect to our previous formulation, which was based on a simulated annealing algorithm.
ISBN: 0-7695-0750-6
Notes: DAG; IAM
Approved: no
Call Number: IAM @ iam @ VAM2000
Serial: 1656
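The abstract above defines the image-to-symbol similarity through the distance from every pixel in the image to the lines of the symbol. The Python sketch below computes that quantity for a set of foreground pixels and a symbol given as line segments; it illustrates only the distance term, not the full Bayesian/EM matching procedure, and the data layout is an assumption for the example.

    # Illustrative sketch: sum over foreground pixels of the distance to the
    # nearest line segment of the symbol model.
    import math

    def point_segment_distance(px, py, seg):
        """Euclidean distance from point (px, py) to segment (x1, y1, x2, y2)."""
        x1, y1, x2, y2 = seg
        dx, dy = x2 - x1, y2 - y1
        length_sq = dx * dx + dy * dy
        if length_sq == 0:
            return math.hypot(px - x1, py - y1)
        t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / length_sq))
        return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))

    def symbol_distance(pixels, segments):
        """Sum of distances from each foreground pixel to its closest model line."""
        return sum(min(point_segment_distance(x, y, s) for s in segments)
                   for x, y in pixels)

    # Hypothetical data: a few foreground pixels matched against a one-segment symbol.
    pixels = [(0, 0), (5, 1), (10, 0)]
    model_lines = [(0, 0, 10, 0)]
    print(symbol_distance(pixels, model_lines))  # 0 + 1 + 0 = 1.0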
 

 
Author: Oriol Ramos Terrades
Title: Linear Combination of Multiresolution Descriptors: Application to Graphics Recognition
Type: Book Whole
Year: 2006
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC & Universite Nancy 2
Thesis: Ph.D. thesis
Editor: Salvatore Antoine Tabbone; Ernest Valveny
Notes: DAG
Approved: no
Call Number: DAG @ dag @ Ram2006
Serial: 713
 

 
Author: Dena Bazazian
Title: Fully Convolutional Networks for Text Understanding in Scene Images
Type: Book Whole
Year: 2018
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: Text understanding in scene images has gained plenty of attention in the computer vision community, and it is an important task in many applications, as text carries semantically rich information about scene content and context. For instance, reading text in a scene can be applied to autonomous driving, scene understanding or assisting visually impaired people. The general aim of scene text understanding is to localize and recognize text in scene images. Text regions are first localized in the original image by a trained detector model and are then fed into a recognition module. The tasks of localization and recognition are highly correlated, since an inaccurate localization can affect the recognition task.
The main purpose of this thesis is to devise efficient methods for scene text understanding. We investigate how the latest results in deep learning can advance text understanding pipelines. Recently, Fully Convolutional Networks (FCNs) and derived methods have achieved significant performance on semantic segmentation and pixel-level classification tasks. Therefore, we take advantage of the strengths of FCN approaches in order to detect text in natural scenes. In this thesis we focus on two challenging tasks of scene text understanding: text detection and word spotting. For the task of text detection, we propose an efficient text proposal technique for scene images. We take the Text Proposals method as the baseline, an approach that reduces the search space of possible text regions in an image. In order to improve the Text Proposals method, we combine it with Fully Convolutional Networks to efficiently reduce the number of proposals while maintaining the same level of accuracy, thus gaining a significant speed-up. Our experiments demonstrate that this text proposal approach yields significantly higher recall rates than line-based text localization techniques, while also producing better-quality localizations. We have also applied this technique to compressed images such as videos from wearable egocentric cameras. For the task of word spotting, we introduce a novel mid-level word representation. We propose a technique to create and exploit an intermediate representation of images based on text attributes that roughly correspond to character probability maps. Our representation extends the concept of the Pyramidal Histogram Of Characters (PHOC) by exploiting Fully Convolutional Networks to derive a pixel-wise mapping of the character distribution within candidate word regions. We call this representation Soft-PHOC. Furthermore, we show how to use Soft-PHOC descriptors for word spotting tasks through an efficient text line proposal algorithm. To evaluate the detected text, we propose a novel line-based evaluation along with the classic bounding-box-based approach. We test our method on incidental scene text images, which comprise real-life scenarios such as urban scenes. The importance of incidental scene text images is due to the complexity of backgrounds, perspective, variety of scripts and languages, short text and little linguistic context; all of these factors together make incidental scene text images challenging.
Address: November 2018
Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey
Editor: Dimosthenis Karatzas; Andrew Bagdanov
ISBN: 978-84-948531-1-1
Notes: DAG; 600.121
Approved: no
Call Number: Admin @ si @ Baz2018
Serial: 3220
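Since the thesis abstract above builds on the Pyramidal Histogram Of Characters (PHOC), here is a short Python sketch of a plain (hard) PHOC vector, which Soft-PHOC relaxes into pixel-wise character probability maps. Assigning a character to a region by the centre of its normalized position interval, and the choice of pyramid levels 2, 3 and 4 over a lowercase alphabet, are simplifications assumed for the example, not the thesis configuration.

    # Illustrative sketch: hard PHOC vector of a word. For each pyramid level the
    # word is split into equal regions; a bit is set when a character's centre
    # falls inside a region.
    import string

    def phoc(word, alphabet=string.ascii_lowercase, levels=(2, 3, 4)):
        word = word.lower()
        vector = []
        for level in levels:
            for region in range(level):
                region_start, region_end = region / level, (region + 1) / level
                occupied = set()
                for i, ch in enumerate(word):
                    centre = (i + 0.5) / len(word)
                    if region_start <= centre < region_end:
                        occupied.add(ch)
                vector.extend(1 if ch in occupied else 0 for ch in alphabet)
        return vector

    v = phoc("text")
    print(len(v), sum(v))  # (2 + 3 + 4) * 26 = 234 bins, a handful of them set to 1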