Records
Author David Fernandez; Josep Llados; Alicia Fornes
Title A graph-based approach for segmenting touching lines in historical handwritten documents Type Journal Article
Year 2014 Publication International Journal on Document Analysis and Recognition Abbreviated Journal IJDAR
Volume 17 Issue 3 Pages 293-312
Keywords Text line segmentation; Handwritten documents; Document image processing; Historical document analysis
Abstract Text line segmentation in handwritten documents is an important task in the recognition of historical documents. Handwritten document images contain text lines with multiple orientations, touching and overlapping characters between consecutive text lines and different document structures, making line segmentation a difficult task. In this paper, we present a new approach for handwritten text line segmentation solving the problems of touching components, curvilinear text lines and horizontally overlapping components. The proposed algorithm formulates line segmentation as finding the central path in the area between two consecutive lines. This is solved as a graph traversal problem. A graph is constructed using the skeleton of the image. Then, a path-finding algorithm is used to find the optimum path between text lines. The proposed algorithm has been evaluated on a comprehensive dataset consisting of five databases: ICDAR2009, ICDAR2013, UMD, the George Washington and the Barcelona Marriages Database. The proposed method outperforms the state-of-the-art considering the different types and difficulties of the benchmarking data.
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1433-2833 ISBN Medium
Area Expedition Conference
Notes DAG; 600.056; 600.061; 602.006; 600.077 Approved no
Call Number Admin @ si @ FLF2014 Serial 2459
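A minimal sketch of the central-path idea summarized in the record above: the paper builds a graph from the image skeleton and finds an optimum path between consecutive text lines; here, as a simplified stand-in, Dijkstra runs on the raw pixel grid with a cost that penalizes crossing ink. The function name, cost values and grid formulation are illustrative assumptions, not the authors' implementation.

# Simplified stand-in for the paper's skeleton-graph path finding:
# Dijkstra over the pixel grid, tracing one left-to-right separating path.
import heapq
import numpy as np

def separating_path(ink, ink_cost=50.0):
    """ink: 2D boolean array (True = foreground ink).
    Returns a min-cost path of (row, col) from the left to the right border."""
    h, w = ink.shape
    cost = np.where(ink, ink_cost, 1.0)          # stepping on ink is expensive
    dist = np.full((h, w), np.inf)
    prev = {}
    pq = []
    for r in range(h):                           # any left-border pixel may start
        dist[r, 0] = cost[r, 0]
        heapq.heappush(pq, (dist[r, 0], (r, 0)))
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[r, c]:
            continue                             # stale heap entry
        if c == w - 1:                           # reached the right border
            path = [(r, c)]
            while (r, c) in prev:
                r, c = prev[(r, c)]
                path.append((r, c))
            return path[::-1]
        for dr, dc in ((-1, 1), (0, 1), (1, 1), (-1, 0), (1, 0)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    return []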
 

 
Author David Fernandez; Jon Almazan; Nuria Cirera; Alicia Fornes; Josep Llados
Title BH2M: the Barcelona Historical Handwritten Marriages database Type Conference Article
Year 2014 Publication 22nd International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 256-261
Keywords
Abstract This paper presents an image database of historical handwritten marriage records stored in the archives of Barcelona Cathedral, together with the corresponding meta-data, intended for evaluating the performance of document analysis algorithms. The contribution of this paper is twofold. First, it presents a complete ground truth which covers the whole pipeline of handwriting recognition research, from layout analysis to recognition and understanding. Second, it is the first dataset in the emerging area of genealogical document analysis, where documents are pseudo-structured manuscripts with specific lexicons and the interest goes beyond pure transcription to context-dependent understanding.
Address Crete Island; Greece; September 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1051-4651 ISBN Medium
Area Expedition Conference ICPR
Notes DAG; 600.056; 600.061; 602.006; 600.077 Approved no
Call Number Admin @ si @ FAC2014 Serial 2461
 

 
Author David Fernandez
Title Handwritten Word Spotting in Old Manuscript Images using Shape Descriptors Type Report
Year 2010 Publication CVC Technical Report Abbreviated Journal
Volume 161 Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis Master's thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG Approved no
Call Number Admin @ si @ Fer2010b Serial 1353
 

 
Author David Fernandez
Title Contextual Word Spotting in Historical Handwritten Documents Type Book Whole
Year 2014 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract There are countless collections of historical documents in archives and libraries that contain plenty of valuable information for historians and researchers. The extraction of this information has become a central task among Document Analysis researchers and practitioners. There is an increasing interest in digitally preserving and providing access to these kinds of documents. But digitization alone is not enough for researchers. The extraction and/or indexing of the information in these documents has attracted growing interest among researchers. In many cases, and in particular in historical manuscripts, the full transcription of these documents is extremely difficult due to their inherent deficiencies: poor physical preservation, different writing styles, obsolete languages, etc. Word spotting has become a popular and efficient alternative to full transcription for such inherently degraded images. The search for words is holistically formulated as a visual search for a given query shape in a larger image, instead of recognizing the input text and searching for the query word with an ASCII string comparison. But the performance of classical word spotting approaches depends on the degradation level of the images, which is unacceptable in many cases. In this thesis we propose a novel paradigm, called contextual word spotting, that uses contextual/semantic information to achieve acceptable results where classical word spotting cannot. The contextual word spotting framework proposed in this thesis is a segmentation-based word spotting approach, so an efficient word segmentation is needed. Historical handwritten documents present some common difficulties that complicate the extraction of the words. We propose a line segmentation approach that formulates the problem as finding the central path in the area between two consecutive lines. This is solved as a graph traversal problem: a path-finding algorithm is used to find the optimal path, in a previously computed graph, between the text lines. Once the text lines are extracted, words are localized inside the text lines using a word segmentation technique from the state of the art. Classical word spotting approaches can be improved using the contextual information of the documents. We introduce a new framework, oriented to handwritten documents with a highly regular structure, that extracts information making use of context. The framework is an efficient tool for semi-automatic transcription that uses contextual information to achieve better results than classical word spotting approaches. The contextual information is automatically discovered by recognizing repetitive structures and categorizing all the words according to semantic classes. The most frequent words in each semantic cluster are extracted, and the same text is used to transcribe all of them. The experimental results achieved in this thesis outperform classical word spotting approaches, demonstrating the suitability of the proposed ensemble architecture for spotting words in historical handwritten documents using contextual information.
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Josep Llados; Alicia Fornes
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-940902-7-1 Medium
Area Expedition Conference
Notes DAG; 600.077 Approved no
Call Number Admin @ si @ Fer2014 Serial 2573
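A toy sketch of the contextual word spotting idea from this thesis record: words are grouped by semantic class, near-duplicate word images are clustered, and a known transcription is propagated to every member of a cluster. The descriptors, the hierarchical clustering and the distance threshold are assumptions chosen for illustration, not the thesis method.

# Propagate a few known transcriptions to clusters of similar word images,
# restricted to words that share the same semantic class.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def propagate_transcriptions(descriptors, semantic_class, known, t=0.5):
    """descriptors: (n, d) np.ndarray of word-image features;
    semantic_class: length-n labels; known: dict {word_index: transcription}
    for a few manually labeled words. Returns a length-n list (None = undecided)."""
    n = len(descriptors)
    out = [None] * n
    for cls in set(semantic_class):
        idx = [i for i in range(n) if semantic_class[i] == cls]
        if len(idx) < 2:
            continue                                  # nothing to cluster
        Z = linkage(descriptors[idx], method="average")
        labels = fcluster(Z, t=t, criterion="distance")
        for lab in set(labels):
            members = [idx[j] for j in range(len(idx)) if labels[j] == lab]
            # reuse any transcription already known inside the cluster
            text = next((known[i] for i in members if i in known), None)
            for i in members:
                out[i] = text
    return out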
 

 
Author David Dueñas; Mostafa Kamal; Petia Radeva
Title Efficient Deep Learning Ensemble for Skin Lesion Classification Type Conference Article
Year 2023 Publication Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications Abbreviated Journal
Volume Issue Pages 303-314
Keywords
Abstract Vision Transformers (ViTs) are deep learning techniques that have been gaining popularity in recent years. In this work, we study the performance of ViTs and Convolutional Neural Networks (CNNs) on skin lesion classification tasks, specifically melanoma diagnosis. We show that, regardless of the individual performance of the two architectures, an ensemble of them can improve generalization. We also present an adaptation of the Gram-OOD* method (detecting out-of-distribution (OOD) samples using Gram matrices) to skin lesion images. Moreover, the integration of super-convergence was critical to building models under strict computing and training time constraints. We evaluated our ensemble of ViTs and CNNs and demonstrated its enhanced generalization by placing first on the 2019 and third on the 2020 ISIC Challenge Live Leaderboards (available at https://challenge.isic-archive.com/leaderboards/live/).
Address Lisbon; Portugal; February 2023
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISIGRAPP
Notes MILAB Approved no
Call Number Admin @ si @ DKR2023 Serial 3928
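A minimal sketch of the CNN/ViT ensembling described above, assuming simple averaging of per-model class probabilities (the paper's exact models and any weighting scheme are not specified here):

import numpy as np

def ensemble_probs(prob_maps):
    """prob_maps: list of (n_samples, n_classes) softmax outputs,
    one array per model. Returns the averaged probabilities."""
    return np.mean(np.stack(prob_maps, axis=0), axis=0)

# usage: probs = ensemble_probs([cnn_probs, vit_probs]); preds = probs.argmax(1)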
 

 
Author David Curto; Albert Clapes; Javier Selva; Sorina Smeureanu; Julio C. S. Jacques Junior; David Gallardo-Pujol; Georgina Guilera; David Leiva; Thomas B. Moeslund; Sergio Escalera; Cristina Palmero
Title Dyadformer: A Multi-Modal Transformer for Long-Range Modeling of Dyadic Interactions Type Conference Article
Year 2021 Publication IEEE/CVF International Conference on Computer Vision Workshops Abbreviated Journal
Volume Issue Pages 2177-2188
Keywords
Abstract Personality computing has become an emerging topic in computer vision, due to the wide range of applications it can be used for. However, most works on the topic have focused on analyzing the individual, even when applied to interaction scenarios, and for short periods of time. To address these limitations, we present the Dyadformer, a novel multi-modal multi-subject Transformer architecture to model individual and interpersonal features in dyadic interactions using variable time windows, thus allowing the capture of long-term interdependencies. Our proposed cross-subject layer allows the network to explicitly model interactions among subjects through attentional operations. This proof-of-concept approach shows how multi-modality and joint modeling of both interactants for longer periods of time helps to predict individual attributes. With Dyadformer, we improve state-of-the-art self-reported personality inference results on individual subjects on the UDIVA v0.5 dataset.
Address Virtual; October 2021
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCVW
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ CCS2021 Serial 3648
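A rough sketch, in PyTorch, of what a cross-subject attention layer in the spirit of the Dyadformer record above could look like: each subject's token sequence attends to the other subject's tokens. Dimensions, normalization placement and layer layout are assumptions, not the published architecture.

# Each interactant's features query the other interactant's features.
import torch
import torch.nn as nn

class CrossSubjectLayer(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_b = nn.LayerNorm(dim)

    def forward(self, a, b):
        # a, b: (batch, time, dim) features of the two interactants
        a2, _ = self.attn_ab(query=a, key=b, value=b)   # A attends to B
        b2, _ = self.attn_ba(query=b, key=a, value=a)   # B attends to A
        return self.norm_a(a + a2), self.norm_b(b + b2)

# usage: layer = CrossSubjectLayer(); a, b = layer(torch.randn(2, 50, 256), torch.randn(2, 50, 256))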
 

 
Author David Castells; Vinh Ngo; Juan Borrego-Carazo; Marc Codina; Carles Sanchez; Debora Gil; Jordi Carrabina
Title A Survey of FPGA-Based Vision Systems for Autonomous Cars Type Journal Article
Year 2022 Publication IEEE Access Abbreviated Journal ACCESS
Volume 10 Issue Pages 132525-132563
Keywords Autonomous automobile; Computer vision; field programmable gate arrays; reconfigurable architectures
Abstract On the road to making self-driving cars a reality, academic and industrial researchers are working hard to continue to increase safety while meeting technical and regulatory constraints. Understanding the surrounding environment is a fundamental task in self-driving cars, and it requires combining complex computer vision algorithms. Although state-of-the-art algorithms achieve good accuracy, their implementations often require powerful computing platforms with high power consumption. In some cases, the processing speed does not meet real-time constraints. FPGA platforms are often used to implement a category of latency-critical algorithms that demand maximum performance and energy efficiency. Since self-driving car computer vision functions fall into this category, one could expect to see a wide adoption of FPGAs in autonomous cars. In this paper, we survey the computer vision FPGA-based works from the literature targeting automotive applications over the last decade. Based on the survey, we identify the strengths and weaknesses of FPGAs in this domain, as well as future research opportunities and challenges.
Address 16 December 2022
Corporate Author Thesis
Publisher IEEE Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.166 Approved no
Call Number Admin @ si @ CNB2022 Serial 3760
 

 
Author David Berga; Xose R. Fernandez-Vidal; Xavier Otazu; Xose M. Pardo
Title SID4VAM: A Benchmark Dataset with Synthetic Images for Visual Attention Modeling Type Conference Article
Year 2019 Publication 18th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 8788-8797
Keywords
Abstract A benchmark of saliency model performance on a synthetic image dataset is provided. Model performance is evaluated through saliency metrics as well as the influence of model inspiration and consistency with human psychophysics. SID4VAM is composed of 230 synthetic images with known salient regions. Images were generated with 15 distinct types of low-level features (e.g. orientation, brightness, color, size...) with a target-distractor pop-out type of synthetic patterns. We have used Free-Viewing and Visual Search task instructions and 7 feature contrasts for each feature category. Our study reveals that state-of-the-art Deep Learning saliency models do not perform well with synthetic pattern images; instead, models with Spectral/Fourier inspiration outperform others in saliency metrics and are more consistent with human psychophysical experimentation. This study proposes a new way to evaluate saliency models in the forthcoming literature, accounting for synthetic images with uniquely low-level feature contexts, distinct from previous eye tracking image datasets.
Address Seoul; Korea; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes NEUROBIT; 600.128 Approved no
Call Number Admin @ si @ BFO2019b Serial 3372
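The record above evaluates models with standard saliency metrics. As one concrete example (chosen here for illustration, not necessarily the paper's exact metric set), Normalized Scanpath Saliency (NSS) is the mean of the z-scored saliency map at human fixation locations:

import numpy as np

def nss(saliency_map, fixation_map):
    """saliency_map: 2D float array; fixation_map: 2D boolean array
    with True at fixated pixels. Returns the NSS score."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    return float(s[fixation_map.astype(bool)].mean())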
 

 
Author David Berga; Xose R. Fernandez-Vidal; Xavier Otazu; Victor Leboran; Xose M. Pardo
Title Measuring bottom-up visual attention in eye tracking experimentation with synthetic images Type Conference Article
Year 2019 Publication 8th Iberian Conference on Perception Abbreviated Journal
Volume Issue Pages
Keywords
Abstract A benchmark of saliency model performance on a synthetic image dataset is provided. Model performance is evaluated through saliency metrics as well as the influence of model inspiration and consistency with human psychophysics. SID4VAM is composed of 230 synthetic images, with known salient regions. Images were generated with 15 distinct types of low-level features (e.g. orientation, brightness, color, size...) with a target-distractor pop-out type of synthetic patterns. We have used Free-Viewing and Visual Search task instructions and 7 feature contrasts for each feature category. Our study reveals that state-of-the-art Deep Learning saliency models do not perform well with synthetic pattern images; instead, models with Spectral/Fourier inspiration outperform others in saliency metrics and are more consistent with human psychophysical experimentation. This study proposes a new way to evaluate saliency models in the forthcoming literature, accounting for synthetic images with uniquely low-level feature contexts, distinct from previous eye tracking image datasets.
Address San Lorenzo El Escorial; July 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CIP
Notes NEUROBIT; 600.128 Approved no
Call Number Admin @ si @ BFO2019c Serial 3375
 

 
Author David Berga; Xose R. Fernandez-Vidal; Xavier Otazu; Victor Leboran; Xose M. Pardo
Title Psychophysical evaluation of individual low-level feature influences on visual attention Type Journal Article
Year 2019 Publication Vision Research Abbreviated Journal VR
Volume 154 Issue Pages 60-79
Keywords Visual attention; Psychophysics; Saliency; Task; Context; Contrast; Center bias; Low-level; Synthetic; Dataset
Abstract In this study we provide the analysis of eye movement behavior elicited by low-level feature distinctiveness with a dataset of synthetically-generated image patterns. Design of visual stimuli was inspired by the ones used in previous psychophysical experiments, namely in free-viewing and visual searching tasks, to provide a total of 15 types of stimuli, divided according to the task and feature to be analyzed. Our interest is to analyze the influences of low-level feature contrast between a salient region and the rest of distractors, providing fixation localization characteristics and reaction time of landing inside the salient region. Eye-tracking data was collected from 34 participants during the viewing of a 230 images dataset. Results show that saliency is predominantly and distinctively influenced by: 1. feature type, 2. feature contrast, 3. temporality of fixations, 4. task difficulty and 5. center bias. This experimentation proposes a new psychophysical basis for saliency model evaluation using synthetic images.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes NEUROBIT; 600.128; 600.120 Approved no
Call Number Admin @ si @ BFO2019a Serial 3274
 

 
Author David Berga; Xavier Otazu; Xose R. Fernandez-Vidal; Victor Leboran; Xose M. Pardo
Title Generating Synthetic Images for Visual Attention Modeling Type Journal Article
Year 2019 Publication Perception Abbreviated Journal PER
Volume 48 Issue Pages 99
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes NEUROBIT; not mentioned Approved no
Call Number Admin @ si @ BOF2019 Serial 3309
 

 
Author David Berga; Xavier Otazu
Title Computations of inhibition of return mechanisms by modulating V1 dynamics Type Conference Article
Year 2019 Publication 28th Annual Computational Neuroscience Meeting Abbreviated Journal
Volume Issue Pages
Keywords
Abstract In this study we present a unified model of the visual cortex for predicting visual attention using real image scenes. Feedforward mechanisms from RGC and LGN have been functionally modeled using wavelet filters at distinct orientations and scales for each chromatic pathway (Magno-, Parvo-, Konio-cellular) and polarity (ON-/OFF-center), by processing image components in the CIE Lab space. In V1, we process cortical interactions with an excitatory-inhibitory network of firing-rate neurons, initially proposed by (Li, 1999) and later extended by (Penacchio et al. 2013). Firing rates from the model's output have been used as predictors of neuronal activity to be projected onto a map in the superior colliculus (with WTA-like computations), determining the locations of visual fixations. These locations are considered already-visited areas for future saccades; therefore, we integrated a spatiotemporal function of inhibition of return mechanisms (for which LIP/FEF is responsible) to feed the model with spatial memory for the next saccades. Foveation mechanisms have been simulated with a cortical magnification function, which distorts the spatial viewing properties of each fixation. Results show lower prediction errors than in the no-IoR case (Fig. 1), and the model is functionally consistent with human psychophysical measurements. Our model follows a biologically-constrained architecture, previously shown to reproduce visual saliency (Berga & Otazu, 2018), visual discomfort (Penacchio et al. 2016), brightness (Penacchio et al. 2013) and chromatic induction (Cerda & Otazu, 2016).
Address Barcelona; July 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CNS
Notes NEUROBIT; no menciona Approved no
Call Number Admin @ si @ BeO2019a Serial 3373
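A toy version of the spatiotemporal inhibition-of-return mechanism described above: saliency around previously fixated locations is suppressed by a Gaussian whose strength decays with fixation age. The sigma and decay values are illustrative, not the model's parameters.

# Suppress saliency around past fixations, most recent inhibited the most.
import numpy as np

def apply_ior(saliency, past_fixations, sigma=25.0, decay=0.7):
    """past_fixations: list of (row, col), oldest first.
    Returns the saliency map with IoR suppression applied."""
    h, w = saliency.shape
    rr, cc = np.mgrid[0:h, 0:w]
    out = saliency.copy()
    for age, (r, c) in enumerate(reversed(past_fixations)):  # age 0 = latest
        g = np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * sigma ** 2))
        out *= 1.0 - (decay ** age) * g       # older fixations inhibit less
    return out

# next fixation via winner-take-all over the inhibited map:
# r, c = np.unravel_index(apply_ior(sal, fixs).argmax(), sal.shape)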
 

 
Author David Berga; Xavier Otazu
Title Computational modeling of visual attention: What do we know from physiology and psychophysics? Type Conference Article
Year 2019 Publication 8th Iberian Conference on Perception Abbreviated Journal
Volume Issue Pages
Keywords
Abstract The latest computer vision architectures use a chain of feedforward computations, mainly optimizing artificial neural networks for very specific tasks. Despite their impressive performance (i.e. in saliency) on real image datasets, these models do not follow several biological principles of the human visual system (e.g. feedback and horizontal connections in cortex) and are unable to predict several visual tasks simultaneously. In this study we present biologically plausible computations from the early stages of the human visual system (i.e. retina and lateral geniculate nucleus) and lateral connections in V1. Despite the simplicity of these processes, and without any type of training or optimization, simulations of the firing-rate dynamics of V1 are able to predict bottom-up visual attention in distinct contexts (previously shown to also predict visual discomfort, brightness and chromatic induction). We also show functional top-down selection mechanisms as feedback inhibition projections (i.e. prefrontal cortex for search/task-based attention and parietal area for inhibition of return). Distinct saliency model predictions are tested against eye tracking datasets in free-viewing and visual search tasks, using real images and synthetically-generated patterns. Results on predicting saliency and scanpaths show that artificial models do not outperform biologically-inspired ones (specifically on datasets that lack the common endogenous biases found in eye tracking experimentation), and that they do not correctly predict contrast sensitivities in pop-out stimulus patterns. This work underlines the importance of considering the biological principles of the visual system when building models that reproduce these (and other) visual effects.
Address San Lorenzo El Escorial; July 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CIP
Notes NEUROBIT; not mentioned Approved no
Call Number Admin @ si @ BeO2019b Serial 3374
 

 
Author David Berga; Xavier Otazu
Title Computations of top-down attention by modulating V1 dynamics Type Conference Article
Year 2020 Publication Computational and Mathematical Models in Vision Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address St. Pete Beach; Florida; May 2020
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MODVIS
Notes NEUROBIT Approved no
Call Number Admin @ si @ BeO2020a Serial 3376
 

 
Author David Berga; Xavier Otazu
Title Modeling Bottom-Up and Top-Down Attention with a Neurodynamic Model of V1 Type Journal Article
Year 2020 Publication Neurocomputing Abbreviated Journal NEUCOM
Volume 417 Issue Pages 270-289
Keywords
Abstract Previous studies suggested that lateral interactions of V1 cells are responsible, among other visual effects, for bottom-up visual attention (alternatively named visual salience or saliency). Our objective is to mimic these connections with a neurodynamic network of firing-rate neurons in order to predict visual attention. Early visual subcortical processes (i.e. retinal and thalamic) are functionally simulated. An implementation of the cortical magnification function is included to define the retinotopic projections towards V1, processing neuronal activity for each distinct view during scene observation. Novel computational definitions of top-down inhibition (in terms of inhibition of return, oculomotor and selection mechanisms) are also proposed to predict attention in Free-Viewing and Visual Search tasks. Results show that our model outperforms other biologically-inspired models of saliency prediction while predicting visual saccade sequences with the same model. We also show how temporal and spatial characteristics of saccade amplitude and inhibition of return can improve the prediction of saccades, as well as how distinct search strategies (in terms of feature-selective or category-specific inhibition) can predict attention in distinct image contexts.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes NEUROBIT Approved no
Call Number Admin @ si @ BeO2020c Serial 3444
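A schematic Euler integration of coupled excitatory-inhibitory firing-rate units, the kind of V1 dynamics (after Li, 1999) that the record above builds on. The connectivity matrices, time constants and rectification nonlinearity are generic placeholders, not the published parameterization.

# Generic excitatory-inhibitory firing-rate network, integrated with Euler steps.
import numpy as np

def simulate(W_ee, W_ei, W_ie, inputs, steps=200, dt=0.1, tau=1.0):
    """W_*: (n, n) coupling matrices; inputs: (n,) feedforward drive.
    Returns excitatory firing rates after `steps` Euler updates."""
    n = len(inputs)
    x_e = np.zeros(n)                  # excitatory membrane potentials
    x_i = np.zeros(n)                  # inhibitory membrane potentials
    g = lambda x: np.maximum(x, 0.0)   # rectification nonlinearity
    for _ in range(steps):
        r_e, r_i = g(x_e), g(x_i)
        x_e += dt / tau * (-x_e + W_ee @ r_e - W_ei @ r_i + inputs)
        x_i += dt / tau * (-x_i + W_ie @ r_e)
        # (top-down attention terms would enter here as extra inhibition)
    return g(x_e)

# usage: rates = simulate(0.5*np.eye(4), 0.1*np.ones((4, 4)), np.eye(4), np.ones(4))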