Author Manuel Carbonell
Title Neural Information Extraction from Semi-structured Documents Type Book Whole
Year 2020 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Sectors such as fintech, legaltech or insurance process an inflow of millions of forms, invoices, id documents, claims or similar every day. Together with these, historical archives provide gigantic amounts of digitized documents containing useful information that needs to be stored as machine-encoded text with a meaningful structure. This procedure, known as information extraction (IE), comprises the steps of localizing and recognizing text, identifying the named entities contained in it and optionally finding relationships among its elements. In this work we explore multi-task neural models at image and graph level to solve all steps in a unified way. While doing so we find benefits and limitations of these end-to-end approaches in comparison with sequential separate methods. More specifically, we first propose a method to produce textual as well as semantic labels with a unified model from handwritten text line images. We do so with a convolutional-recurrent neural model trained with connectionist temporal classification to predict the textual as well as the semantic information encoded in the images. Secondly, motivated by the success of this approach, we investigate the unification of the localization and recognition tasks for handwritten text in full pages with an end-to-end model, observing benefits in doing so. Having two models that each tackle a pair of consecutive information extraction tasks in an end-to-end manner, we lastly contribute a method to put them all together in a single neural network that solves the whole information extraction pipeline in a unified way. Doing so we observe some benefits and some limitations of the approach, suggesting that in certain cases it is beneficial to train specialized models that excel at a single challenging task of the information extraction process, such as the recognition of named entities or the extraction of relationships between them. For this reason we lastly study the use of recently emerged graph neural network architectures for the semantic tasks of the information extraction process, namely named entity recognition and relation extraction, achieving promising results on the relation extraction part.
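A minimal sketch of the first contribution's idea described above: a convolutional-recurrent model trained with connectionist temporal classification whose output alphabet is extended with semantic tag tokens, so that transcription and entity labels are decoded jointly from text line images. This is not the thesis code; the architecture, alphabet and all sizes are illustrative assumptions (PyTorch).

```python
import torch
import torch.nn as nn

# Hypothetical alphabet: characters plus opening/closing semantic tag tokens,
# so the CTC decoder emits entity labels inline with the transcription.
VOCAB = list("abcdefghijklmnopqrstuvwxyz ") + ["<name>", "</name>", "<date>", "</date>"]
BLANK = len(VOCAB)  # CTC blank index

class CRNN(nn.Module):
    def __init__(self, num_classes=len(VOCAB) + 1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(64 * 8, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, num_classes)

    def forward(self, x):                        # x: (B, 1, 32, W) line images
        f = self.conv(x)                         # (B, 64, 8, W/4)
        f = f.permute(0, 3, 1, 2).flatten(2)     # (B, W/4, 512) column features
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(-1)      # per-column class log-probs

model = CRNN()
log_probs = model(torch.randn(4, 1, 32, 128)).permute(1, 0, 2)  # (T, B, C)
targets = torch.randint(0, len(VOCAB), (4, 10))  # dummy tagged transcriptions
loss = nn.CTCLoss(blank=BLANK)(
    log_probs, targets,
    input_lengths=torch.full((4,), log_probs.size(0), dtype=torch.long),
    target_lengths=torch.full((4,), 10, dtype=torch.long),
)
loss.backward()
```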
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Alicia Fornes;Mauricio Villegas;Josep Llados
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-122714-1-6 Medium
Area Expedition Conference
Notes DAG; 600.121 Approved no
Call Number Admin @ si @ Car20 Serial 3483
 

 
Author Cristina Cañero
Title Deformable models applied in Medical Imaging Type Report
Year 1999 Publication CVC Technical Report #33 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address CVC (UAB)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number Admin @ si @ Can1999 Serial 193
 

 
Author Ozan Caglayan; Walid Aransa; Adrien Bardet; Mercedes Garcia-Martinez; Fethi Bougares; Loic Barrault; Marc Masana; Luis Herranz; Joost Van de Weijer
Title LIUM-CVC Submissions for WMT17 Multimodal Translation Task Type Conference Article
Year 2017 Publication 2nd Conference on Machine Translation Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This paper describes the monomodal and multimodal Neural Machine Translation systems developed by LIUM and CVC for WMT17 Shared Task on Multimodal Translation. We mainly explored two multimodal architectures where either global visual features or convolutional feature maps are integrated in order to benefit from visual context. Our final systems ranked first for both En-De and En-Fr language pairs according to the automatic evaluation metrics METEOR and BLEU.
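This toy sketch (not the actual LIUM-CVC system) shows one common way a global visual feature can enter a neural translation model: projecting the image feature to initialize the decoder state. Module names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VisuallyInitializedDecoder(nn.Module):
    def __init__(self, img_dim=2048, hid=256, vocab=1000):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hid)  # map CNN feature to RNN size
        self.emb = nn.Embedding(vocab, hid)
        self.rnn = nn.GRU(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, tgt_tokens, img_feat):
        # The decoder hidden state starts from the projected visual feature,
        # letting the image bias every decoding step.
        h0 = torch.tanh(self.img_proj(img_feat)).unsqueeze(0)  # (1, B, hid)
        dec, _ = self.rnn(self.emb(tgt_tokens), h0)
        return self.out(dec)                                   # (B, T, vocab)

dec = VisuallyInitializedDecoder()
logits = dec(torch.randint(0, 1000, (2, 7)), torch.randn(2, 2048))
print(logits.shape)  # torch.Size([2, 7, 1000])
```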
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WMT
Notes LAMP; 600.106; 600.120 Approved no
Call Number Admin @ si @ CAB2017 Serial 3035
 

 
Author Karla Lizbeth Caballero
Title Coronary Plaque Classification using Intravascular Ultrasound Images and Radio Frequency Signals Type Report
Year 2007 Publication CVC Technical Report #105 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address CVC (UAB)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number Admin @ si @ Cab2007 Serial 822
 

 
Author Roger Max Calle Quispe; Maya Aghaei Gavari; Eduardo Aguilar Torres
Title Towards real-time accurate safety helmets detection through a deep learning-based method Type Journal
Year 2023 Publication Ingeniare. Revista chilena de ingenieria Abbreviated Journal
Volume 31 Issue 12 Pages
Keywords
Abstract Occupational safety is a fundamental activity in industries and revolves around the management of the necessary controls that must be present to mitigate occupational risks. These controls include verifying the use of Personal Protection Equipment (PPE). Within PPE, safety helmets are vital to reducing severe or fatal consequences caused by head injuries. This problem has recently been addressed by various deep learning-based research works that detect the usage of safety helmets by people present in industrial settings.

These works have achieved promising results for safety helmet detection using object detection methods from the YOLO family. In this work, we propose to analyze the performance of Scaled-YOLOv4, a novel model of the YOLO family that has not previously been studied for this problem. The performance of Scaled-YOLOv4 is evaluated on two public databases, carefully selected among the previously proposed datasets for the occupational safety framework. We demonstrate the superiority of Scaled-YOLOv4 in terms of mAP and F1-score with respect to previous works on both databases. Further, we summarize the currently available datasets for safety helmet detection purposes and discuss their suitability.
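As a companion to the evaluation described above, here is a self-contained sketch of how a detection F1-score can be computed by greedily matching predicted and ground-truth helmet boxes at an IoU threshold; it is an illustrative assumption, not the paper's evaluation code. Boxes are (x1, y1, x2, y2).

```python
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def detection_f1(preds, gts, thr=0.5):
    matched, tp = set(), 0
    for p in preds:                  # assume preds sorted by confidence
        j = max(range(len(gts)), key=lambda k: iou(p, gts[k]), default=None)
        if j is not None and j not in matched and iou(p, gts[j]) >= thr:
            matched.add(j)
            tp += 1
    prec = tp / max(len(preds), 1)
    rec = tp / max(len(gts), 1)
    return 2 * prec * rec / max(prec + rec, 1e-9)

print(detection_f1([(10, 10, 50, 50)], [(12, 12, 48, 52)]))  # ~1.0
```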
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ CAA2023 Serial 3846
 

 
Author Simone Balocco; Maria Zuluaga; Guillaume Zahnd; Su-Lin Lee; Stefanie Demirci
Title Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting Type Book Whole
Year 2016 Publication Computing and Visualization for Intravascular Imaging and Computer-Assisted Stenting Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Elsevier Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 9780128110188 Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ BZZ2016 Serial 2821
 

 
Author David Berga; C. Wloka; JK. Tsotsos
Title Modeling task influences for saccade sequence and visual relevance prediction Type Journal Article
Year 2019 Publication Journal of Vision Abbreviated Journal JV
Volume 19 Issue 10 Pages 106c-106c
Keywords
Abstract Previous work from Wloka et al. (2017) presented the Selective Tuning Attentive Reference model Fixation Controller (STAR-FC), an active vision model for saccade prediction. Although the model is able to efficiently predict saccades during free-viewing, it is well known that stimulus and task instructions can strongly affect eye movement patterns (Yarbus, 1967). These factors are considered in previous Selective Tuning architectures (Tsotsos and Kruijne, 2014; Tsotsos, Kotseruba and Wloka, 2016; Rosenfeld, Biparva and Tsotsos, 2017), which propose a way to combine bottom-up and top-down contributions to fixation and saccade programming. In particular, task priming has been shown to be crucial to the deployment of eye movements, involving interactions between brain areas related to goal-directed behavior, working and long-term memory in combination with stimulus-driven eye movement neuronal correlates. Initial theories and models of these influences include (Rao, Zelinsky, Hayhoe and Ballard, 2002; Navalpakkam and Itti, 2005; Huang and Pashler, 2007) and show distinct ways to process the task requirements in combination with bottom-up attention. In this study we extend the STAR-FC with novel computational definitions of Long-Term Memory, Visual Task Executive and a Task Relevance Map. With these modules we are able to use textual instructions to guide the model to attend to specific categories of objects and/or places in the scene. We have designed our memory model by processing a hierarchy of visual features learned from salient object detection datasets. The relationship between the executive task instructions and the memory representations has been specified using a tree of semantic similarities between the learned features and the object category labels. Results reveal that by using this model, the resulting relevance maps and predicted saccades have a higher probability of falling inside the salient regions, depending on the distinct task instructions.
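A loose illustration of the general mechanism described above (not the STAR-FC implementation): a bottom-up saliency map is modulated by a top-down task-relevance map, and the next fixation is taken at the maximum of the combined map. The arrays and the mixing rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
saliency = rng.random((60, 80))        # bottom-up conspicuity map
relevance = np.zeros((60, 80))
relevance[20:40, 50:70] = 1.0          # task-primed region (e.g. "look right")

combined = saliency * (0.3 + 0.7 * relevance)    # task gating of saliency
next_fix = np.unravel_index(np.argmax(combined), combined.shape)
print("next fixation (row, col):", next_fix)     # falls in the primed region
```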
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes NEUROBIT; 600.128; 600.120 Approved no
Call Number Admin @ si @ BWT2019 Serial 3308
 

 
Author Marco Buzzelli; Joost Van de Weijer; Raimondo Schettini
Title Learning Illuminant Estimation from Object Recognition Type Conference Article
Year 2018 Publication 25th International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 3234 - 3238
Keywords Illuminant estimation; computational color constancy; semi-supervised learning; deep learning; convolutional neural networks
Abstract In this paper we present a deep learning method to estimate the illuminant of an image. Our model is not trained with illuminant annotations, but with the objective of improving performance on an auxiliary task such as object recognition. To the best of our knowledge, this is the first example of a deep learning architecture for illuminant estimation that is trained without ground-truth illuminants. We evaluate our solution on standard datasets for color constancy, and compare it with state-of-the-art methods. Our proposal is shown to outperform most deep learning methods in a cross-dataset evaluation setup, and to present competitive results in a comparison with parametric solutions.
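For context, a short sketch of how an estimated illuminant is conventionally applied and evaluated in color constancy (standard practice, not the paper's model): per-channel von Kries correction, and the angular error metric used to score estimates.

```python
import numpy as np

def correct(img, illuminant):
    """Apply diagonal (von Kries) correction; img is HxWx3, illuminant is RGB."""
    ill = np.asarray(illuminant, dtype=float)
    return img / (ill / ill.max())

def angular_error(est, gt):
    # Angle in degrees between estimated and ground-truth illuminant vectors.
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

img = np.ones((2, 2, 3)) * [0.8, 1.0, 0.6]       # scene lit by a warm illuminant
print(correct(img, [0.8, 1.0, 0.6])[0, 0])       # ~[1. 1. 1.] after correction
print(angular_error([0.8, 1.0, 0.6], [0.7, 1.0, 0.7]))  # error in degrees
```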
Address Athens; Greece; October 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes LAMP; 600.109; 600.120 Approved no
Call Number Admin @ si @ BWS2018 Serial 3157
 

 
Author Jorge Bernal; Fernando Vilariño; F. Javier Sanchez; M. Arnold; Anarta Ghosh; Gerard Lacey
Title Experts vs Novices: Applying Eye-tracking Methodologies in Colonoscopy Video Screening for Polyp Search Type Conference Article
Year 2014 Publication 2014 Symposium on Eye Tracking Research and Applications Abbreviated Journal
Volume Issue Pages 223-226
Keywords
Abstract We present in this paper a novel study aiming at identifying the differences in visual search patterns between physicians of diverse levels of expertise during the screening of colonoscopy videos. Physicians were clustered into two groups (experts and novices) according to the number of procedures performed, and fixations were captured by an eye-tracker device during the task of polyp search in different video sequences. These fixations were integrated into heat maps, one for each cluster. The obtained maps were validated over a ground truth consisting of a mask of the polyp, and the comparison between experts and novices was performed using metrics such as reaction time, dwelling time and energy concentration ratio. Experimental results show a statistically significant difference between experts and novices, and the obtained maps prove to be a useful tool for the characterisation of the behaviour of each group.
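A small sketch of one metric named above, the energy concentration ratio: the fraction of fixation heat-map energy falling inside the polyp ground-truth mask. The data here is synthetic and for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
heatmap = rng.random((100, 100))     # accumulated (e.g. Gaussian-blurred) fixations
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True            # ground-truth polyp region

ecr = heatmap[mask].sum() / heatmap.sum()
print(f"energy concentration ratio: {ecr:.3f}")  # ~0.04 for uniform noise
```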
Address USA; March 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4503-2751-0 Medium
Area Expedition Conference ETRA
Notes MV; 600.047; 600.060;SIAI Approved no
Call Number Admin @ si @ BVS2014 Serial 2448
 

 
Author Jorge Bernal; Fernando Vilariño; F. Javier Sanchez
Title Feature Detectors and Feature Descriptors: Where We Are Now Type Report
Year 2010 Publication CVC Technical Report Abbreviated Journal
Volume 154 Issue Pages
Keywords
Abstract Feature Detection and Feature Description are clearly relevant topics nowadays. Many Computer Vision applications rely on the use of several of these techniques in order to extract the most significant aspects of an image, so that they can help in tasks such as image retrieval, image registration, object recognition, object categorization and texture classification, among others. In this paper we define what Feature Detection and Description are, and then present an extensive collection of methods to show the different techniques in current use. The aim of this report is to provide a glimpse of what is being used currently in these fields and to serve as a starting point for future endeavours.
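As a quick illustration of the detect-and-describe pipeline the report surveys, the snippet below runs OpenCV's ORB, one representative detector/descriptor, on a stand-in image (assumes opencv-python is installed); any of the surveyed methods could be substituted.

```python
import cv2
import numpy as np

img = (np.random.rand(200, 200) * 255).astype(np.uint8)  # stand-in image
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)  # detect + describe
print(len(keypoints), "keypoints,",
      None if descriptors is None else descriptors.shape, "descriptor matrix")
```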
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area 800 Expedition Conference
Notes MV;SIAI Approved no
Call Number Admin @ si @ BVS2010; IAM @ iam @ BVS2010 Serial 1348
 

 
Author Chris Bahnsen; David Vazquez; Antonio Lopez; Thomas B. Moeslund
Title Learning to Remove Rain in Traffic Surveillance by Using Synthetic Data Type Conference Article
Year 2019 Publication 14th International Conference on Computer Vision Theory and Applications Abbreviated Journal
Volume Issue Pages 123-130
Keywords Rain Removal; Traffic Surveillance; Image Denoising
Abstract Rainfall is a problem in automated traffic surveillance. Rain streaks occlude road users and degrade the overall visibility, which in turn decreases object detection performance. One way of alleviating this is by artificially removing the rain from the images. This requires knowledge of corresponding rainy and rain-free images. Such images are often produced by overlaying synthetic rain on top of rain-free images. However, this method fails to incorporate the fact that rain falls in the entire three-dimensional volume of the scene. To overcome this, we introduce training data from the SYNTHIA virtual world that models rain streaks in the entirety of a scene. We train a conditional Generative Adversarial Network for rain removal and apply it on traffic surveillance images from SYNTHIA and the AAU RainSnow datasets. To measure the applicability of the rain-removed images in a traffic surveillance context, we run the YOLOv2 object detection algorithm on the original and rain-removed frames. The results on SYNTHIA show an 8% increase in detection accuracy compared to the original rainy images. Interestingly, we find that high PSNR or SSIM scores do not imply good object detection performance.
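A sketch of the image-quality side of this evaluation: PSNR between a rain-removed frame and its rain-free reference (synthetic arrays here; the paper's observation is that a high score need not imply good detection performance). SSIM would be computed analogously.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB between two images of equal shape.
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
clean = rng.integers(0, 256, (64, 64, 3))                  # rain-free reference
derained = np.clip(clean + rng.normal(0, 5, clean.shape), 0, 255)
print(f"PSNR: {psnr(clean, derained):.2f} dB")
```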
Address Prague; Czech Republic; February 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISIGRAPP
Notes ADAS; 600.118 Approved no
Call Number Admin @ si @ BVL2019 Serial 3256
 

 
Author Jorge Bernal; Nima Tajkbaksh; F. Javier Sanchez; Bogdan J. Matuszewski; Hao Chen; Lequan Yu; Quentin Angermann; Olivier Romain; Bjorn Rustad; Ilangko Balasingham; Konstantin Pogorelov; Sungbin Choi; Quentin Debard; Lena Maier Hein; Stefanie Speidel; Danail Stoyanov; Patrick Brandao; Henry Cordova; Cristina Sanchez Montes; Suryakanth R. Gurudu; Gloria Fernandez Esparrach; Xavier Dray; Jianming Liang; Aymeric Histace
Title Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results from the MICCAI 2015 Endoscopic Vision Challenge Type Journal Article
Year 2017 Publication IEEE Transactions on Medical Imaging Abbreviated Journal TMI
Volume 36 Issue 6 Pages 1231 - 1249
Keywords Endoscopic vision; Polyp Detection; Handcrafted features; Machine Learning; Validation Framework
Abstract Colonoscopy is the gold standard for colon cancer screening, though some polyps are still missed, thus preventing early disease detection and treatment. Several computational systems have been proposed to assist polyp detection during colonoscopy, but so far without consistent evaluation. The lack of publicly available annotated databases has made it difficult to compare methods and to assess whether they achieve performance levels acceptable for clinical use. The Automatic Polyp Detection subchallenge, conducted as part of the Endoscopic Vision Challenge (http://endovis.grand-challenge.org) at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2015, was an effort to address this need. In this paper, we report the results of this comparative evaluation of polyp detection methods, and describe additional experiments to further explore differences between methods. We define performance metrics and provide evaluation databases that allow comparison of multiple methodologies. Results show that convolutional neural networks (CNNs) are the state of the art. Nevertheless, it is also demonstrated that combining different methodologies can lead to improved overall performance.
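A simplified sketch of the kind of frame-level validation such benchmarks rely on; the exact challenge protocol differs, so treat the true-positive criterion below (a detection point falling inside the ground-truth polyp mask) as an assumption.

```python
import numpy as np

def score_frame(detections, mask):
    """detections: list of (row, col) points; mask: boolean polyp mask."""
    tp = sum(1 for r, c in detections if mask[r, c])  # hits inside the polyp
    fp = len(detections) - tp                         # spurious detections
    fn = int(tp == 0 and mask.any())                  # polyp present, never hit
    return tp, fp, fn

mask = np.zeros((50, 50), dtype=bool)
mask[10:20, 10:20] = True
print(score_frame([(15, 15), (40, 40)], mask))  # (1, 1, 0)
```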
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MV; 600.096; 600.075 Approved no
Call Number Admin @ si @ BTS2017 Serial 2949
 

 
Author Claudio Baecchi; Francesco Turchini; Lorenzo Seidenari; Andrew Bagdanov; Alberto del Bimbo
Title Fisher vectors over random density forest for object recognition Type Conference Article
Year 2014 Publication 22nd International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 4328-4333
Keywords
Abstract
Address Stockholm; Sweden; August 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICPR
Notes LAMP; 600.079 Approved no
Call Number Admin @ si @ BTS2014 Serial 2518
 

 
Author Ali Furkan Biten; R. Tito; Andres Mafla; Lluis Gomez; Marçal Rusiñol; M. Mathew; C.V. Jawahar; Ernest Valveny; Dimosthenis Karatzas
Title ICDAR 2019 Competition on Scene Text Visual Question Answering Type Conference Article
Year 2019 Publication 15th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages 1563-1570
Keywords
Abstract This paper presents the final results of the ICDAR 2019 Scene Text Visual Question Answering competition (ST-VQA). ST-VQA introduces an important aspect that is not addressed by any Visual Question Answering system to date, namely the incorporation of scene text to answer questions asked about an image. The competition introduces a new dataset comprising 23,038 images annotated with 31,791 question/answer pairs where the answer is always grounded on text instances present in the image. The images are taken from 7 different public computer vision datasets, covering a wide range of scenarios. The competition was structured in three tasks of increasing difficulty, which require reading the text in a scene and understanding it in the context of the scene to correctly answer a given question. A novel evaluation metric is presented, which elegantly assesses both key capabilities expected from an optimal model: text recognition and image understanding. A detailed analysis of results from the different participants is showcased, providing insight into the current capabilities of VQA systems that can read. We firmly believe the dataset proposed in this challenge will be an important milestone on the path towards more robust and general models that can exploit scene text to achieve holistic image understanding.
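The metric referred to above is the Average Normalized Levenshtein Similarity (ANLS); a compact sketch follows, in which answers within an edit-distance threshold receive partial credit, so both recognition slips and reasoning errors lower the score. The details here (lowercasing, tau = 0.5) follow common usage and are assumptions.

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def anls(pred, gt, tau=0.5):
    # Normalized distance; answers beyond the threshold score zero.
    d = levenshtein(pred.lower(), gt.lower()) / max(len(pred), len(gt), 1)
    return 1 - d if d < tau else 0.0

print(anls("coca cola", "coca-cola"))  # high partial credit (~0.89)
```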
Address Sydney; Australia; September 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 600.129; 601.338; 600.121 Approved no
Call Number Admin @ si @ BTM2019c Serial 3286
 

 
Author Ali Furkan Biten; R. Tito; Andres Mafla; Lluis Gomez; Marçal Rusiñol; C.V. Jawahar; Ernest Valveny; Dimosthenis Karatzas
Title Scene Text Visual Question Answering Type Conference Article
Year 2019 Publication 18th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 4291-4301
Keywords
Abstract Current visual question answering datasets do not consider the rich semantic information conveyed by text within an image. In this work, we present a new dataset, ST-VQA, that aims to highlight the importance of exploiting the high-level semantic information present in images as textual cues in the Visual Question Answering process. We use this dataset to define a series of tasks of increasing difficulty for which reading the scene text in the context provided by the visual information is necessary to reason and generate an appropriate answer. We propose a new evaluation metric for these tasks to account both for reasoning errors and for shortcomings of the text recognition module. In addition, we put forward a series of baseline methods, which provide further insight into the newly released dataset, and set the scene for further research.
Address Seoul; Korea; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes DAG; 600.129; 600.135; 601.338; 600.121 Approved no
Call Number Admin @ si @ BTM2019b Serial 3285