Author David Berga; Xavier Otazu
Title A neurodynamic model of saliency prediction in V1 Type Journal Article
Year 2022 Publication Neural Computation Abbreviated Journal NEURALCOMPUT
Volume 34 Issue 2 Pages 378-414
Keywords
Abstract Lateral connections in the primary visual cortex (V1) have long been hypothesized to be responsible for several visual processing mechanisms such as brightness induction, chromatic induction, visual discomfort, and bottom-up visual attention (also named saliency). Many computational models have been developed to independently predict these and other visual processes, but no computational model has been able to reproduce all of them simultaneously. In this work, we show that a biologically plausible computational model of lateral interactions of V1 is able to simultaneously predict saliency and all the aforementioned visual processes. Our model's architecture (NSWAM) is based on Penacchio's neurodynamic model of lateral connections of V1. It is defined as a network of firing rate neurons, sensitive to visual features such as brightness, color, orientation, and scale. We tested NSWAM saliency predictions using images from several eye tracking data sets. We show that the accuracy of predictions obtained by our architecture, using shuffled metrics, is similar to other state-of-the-art computational methods, particularly with synthetic images (CAT2000-Pattern and SID4VAM) that mainly contain low-level features. Moreover, we outperform other biologically inspired saliency models that are specifically designed to exclusively reproduce saliency. We show that our biologically plausible model of lateral connections can simultaneously explain different visual processes present in V1 (without applying any type of training or optimization and keeping the same parameterization for all the visual processes). This can be useful for the definition of a unified architecture of the primary visual cortex.
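The firing-rate dynamics the abstract refers to can be illustrated with a minimal sketch. This is not the authors' NSWAM code: the two-population update, kernel widths and gains below are illustrative assumptions, meant only to show how short-range excitation and longer-range inhibition turn a feature map into a saliency-like response.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def relu(x):
        return np.maximum(x, 0.0)

    def lateral_saliency(stimulus, steps=200, dt=0.1):
        # Toy two-population (excitatory/inhibitory) firing-rate model on
        # a 2D feature map: short-range recurrent excitation, wider-range
        # inhibition. Locally distinct regions keep firing while uniform
        # regions suppress themselves, the center-surround logic behind
        # saliency. Time constants and gains are illustrative.
        e = np.zeros_like(stimulus, dtype=float)  # excitatory rates
        i = np.zeros_like(stimulus, dtype=float)  # inhibitory rates
        for _ in range(steps):
            exc = gaussian_filter(e, sigma=1.0)   # local excitation
            inh = gaussian_filter(e, sigma=4.0)   # surround drive
            e += dt * (-e + relu(stimulus + 0.8 * exc - 1.2 * i))
            i += dt * (-i + relu(inh))
        return e  # read out as a saliency map

    # usage: feed a feature-contrast map, e.g. a Gabor-filtered image
    saliency = lateral_saliency(np.random.rand(64, 64))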
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes NEUROBIT; 600.128; 600.120 Approved no
Call Number Admin @ si @ BeO2022 Serial 3696
 
Author David Berga; Marc Masana; Joost Van de Weijer
Title Disentanglement of Color and Shape Representations for Continual Learning Type Conference Article
Year 2020 Publication ICML Workshop on Continual Learning Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We hypothesize that disentangled feature representations suffer less from catastrophic forgetting. As a case study we perform explicit disentanglement of color and shape, by adjusting the network architecture. We tested classification accuracy and forgetting in a task-incremental setting with Oxford-102 Flowers dataset. We combine our method with Elastic Weight Consolidation, Learning without Forgetting, Synaptic Intelligence and Memory Aware Synapses, and show that feature disentanglement positively impacts continual learning performance.
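As a rough illustration of the kind of architectural disentanglement the abstract describes (not the paper's actual network; the layer sizes and the chromaticity computation are assumptions), separate feature sub-spaces for shape and color can be forced by construction:

    import torch
    import torch.nn as nn

    class DisentangledNet(nn.Module):
        # The shape branch only sees a grayscale version of the input and
        # the color branch only a crude chromaticity version, so each
        # feature sub-space is forced to encode one factor; continual
        # learning methods (EWC, LwF, ...) can then act on each branch.
        def __init__(self, num_classes):
            super().__init__()
            self.shape = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.color = nn.Sequential(
                nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.head = nn.Linear(64 + 16, num_classes)

        def forward(self, rgb):
            gray = rgb.mean(dim=1, keepdim=True)                 # shape cue
            chroma = rgb[:, :2] - rgb.mean(dim=1, keepdim=True)  # color cue
            return self.head(torch.cat([self.shape(gray),
                                        self.color(chroma)], dim=1))

    net = DisentangledNet(num_classes=102)   # e.g. Oxford-102 Flowers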
Address Virtual; July 2020
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICMLW
Notes LAMP; 600.120 Approved no
Call Number Admin @ si @ BMW2020 Serial 3506
 
Author David Berga; C. Wloka; JK. Tsotsos
Title Modeling task influences for saccade sequence and visual relevance prediction Type Journal Article
Year 2019 Publication Journal of Vision Abbreviated Journal JV
Volume 19 Issue 10 Pages 106c-106c
Keywords
Abstract Previous work from Wloka et al. (2017) presented the Selective Tuning Attentive Reference Fixation Controller (STAR-FC), an active vision model for saccade prediction. Although the model is able to efficiently predict saccades during free viewing, it is well known that stimulus and task instructions can strongly affect eye movement patterns (Yarbus, 1967). These factors are considered in previous Selective Tuning architectures (Tsotsos and Kruijne, 2014; Tsotsos, Kotseruba and Wloka, 2016; Rosenfeld, Biparva and Tsotsos, 2017), which propose a way to combine bottom-up and top-down contributions to fixation and saccade programming. In particular, task priming has been shown to be crucial to the deployment of eye movements, involving interactions between brain areas related to goal-directed behavior, working and long-term memory in combination with stimulus-driven eye movement neuronal correlates. Initial theories and models of these influences (Rao, Zelinsky, Hayhoe and Ballard, 2002; Navalpakkam and Itti, 2005; Huang and Pashler, 2007) show distinct ways to process the task requirements in combination with bottom-up attention. In this study we extend STAR-FC with novel computational definitions of a Long-Term Memory, a Visual Task Executive and a Task Relevance Map. With these modules we are able to use textual instructions to guide the model to attend to specific categories of objects and/or places in the scene. We designed our memory model by processing a hierarchy of visual features learned from salient object detection datasets. The relationship between the executive task instructions and the memory representations is specified using a tree of semantic similarities between the learned features and the object category labels. Results reveal that, using this model, the resulting relevance maps and predicted saccades have a higher probability of falling inside the salient regions, depending on the distinct task instructions.
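A minimal sketch of the task relevance idea follows; the category maps and the similarity function are placeholders, not the Selective Tuning implementation:

    import numpy as np

    def task_relevance_map(category_maps, instruction, similarity):
        # category_maps: dict label -> 2D activation map from a learned
        # feature hierarchy; similarity(a, b): semantic similarity in
        # [0, 1] between two words (e.g., from a taxonomy tree). The
        # relevance map is a similarity-weighted combination, so an
        # instruction like 'dog' up-weights animal-selective responses.
        relevance = None
        for label, amap in category_maps.items():
            m = similarity(instruction, label) * amap
            relevance = m if relevance is None else relevance + m
        return relevance / (relevance.max() + 1e-8)  # normalize to [0, 1]

    # toy usage with a stand-in similarity
    maps = {'dog': np.random.rand(32, 32), 'car': np.random.rand(32, 32)}
    sim = lambda a, b: 1.0 if a == b else 0.1
    rel = task_relevance_map(maps, 'dog', sim)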
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes NEUROBIT; 600.128; 600.120 Approved no
Call Number Admin @ si @ BWT2019 Serial 3308
 
Author David Berga
Title Understanding Eye Movements: Psychophysics and a Model of Primary Visual Cortex Type Book Whole
Year 2019 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Humans move their eyes in order to learn visual representations of the world. These eye movements depend on distinct factors, driven either by the scene that we perceive or by our own decisions. Selecting what is relevant to attend to is part of our survival mechanisms and of the way we build reality, as we constantly react, both consciously and unconsciously, to all the stimuli that are projected onto our eyes. In this thesis we try to explain (1) how we move our eyes, (2) how to build machines that understand visual information and deploy eye movements, and (3) how to make these machines understand tasks in order to decide where to move the eyes.
(1) We provide an analysis of eye movement behavior elicited by low-level feature distinctiveness with a dataset of 230 synthetically generated image patterns. A total of 15 types of stimuli were generated (e.g. orientation, brightness, color, size, etc.), with 7 feature contrasts for each feature category. Eye-tracking data was collected from 34 participants during the viewing of the dataset, using Free-Viewing and Visual Search task instructions. Results showed that saliency is predominantly and distinctively influenced by: 1. feature type, 2. feature contrast, 3. temporality of fixations, 4. task difficulty and 5. center bias. From this dataset (SID4VAM), we computed a benchmark of saliency models by testing performance using psychophysical patterns. Model performance was evaluated considering model inspiration and consistency with human psychophysics. Our study reveals that state-of-the-art Deep Learning saliency models do not perform well with synthetic pattern images; instead, models with spectral/Fourier inspiration outperform others in saliency metrics and are more consistent with human psychophysical experimentation.
(2) Computations in the primary visual cortex (area V1 or striate cortex) have long been hypothesized to be responsible for several visual processing mechanisms, among them bottom-up visual attention (also named saliency). In order to validate this hypothesis, images from eye tracking datasets have been processed with a biologically plausible model of V1 (named the Neurodynamic Saliency Wavelet Model, or NSWAM). Following Li's neurodynamic model, we define V1's lateral connections with a network of firing rate neurons, sensitive to visual features such as brightness, color, orientation and scale. Early subcortical processes (i.e. retinal and thalamic) are functionally simulated. The resulting saliency maps are generated from the model output, representing the neuronal activity of V1 projections towards brain areas involved in eye movement control. We want to pinpoint that our unified computational architecture is able to reproduce several visual processes (i.e. brightness induction, chromatic induction and visual discomfort) without applying any type of training or optimization and keeping the same parametrization. The model has been extended (NSWAM-CM) with an implementation of the cortical magnification function to define the retinotopic projections towards V1, processing neuronal activity for each distinct view during scene observation. Novel computational definitions of top-down inhibition (in terms of inhibition of return and selection mechanisms) are also proposed to predict attention in Free-Viewing and Visual Search conditions. Results show that our model outperforms other biologically-inspired models of saliency prediction as well as at predicting visual saccade sequences, specifically for nature and synthetic images. We also show how temporal and spatial characteristics of inhibition of return can improve prediction of saccades, as well as how distinct search strategies (in terms of feature-selective or category-specific inhibition) predict attention in distinct image contexts.
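As an illustration of the cortical magnification function mentioned above, the sketch below blends sharp and blurred copies of an image according to an inverse-linear magnification law; the constants follow a commonly cited parametric form for human V1, but this is only a stand-in for the thesis' NSWAM-CM machinery.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def magnification(ecc_deg, a=17.3, e2=0.75):
        # Inverse-linear cortical magnification (cortex per degree of
        # visual angle); a and e2 are commonly cited human V1 values,
        # used here purely for illustration.
        return a / (ecc_deg + e2)

    def foveate(img, fix_xy, px_per_deg=30.0):
        # Weight a sharp and a blurred copy by the relative magnification
        # at each pixel's eccentricity: a crude stand-in for the
        # retinotopic projection of one fixation ('view') onto V1.
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w]
        ecc = np.hypot(xs - fix_xy[0], ys - fix_xy[1]) / px_per_deg
        rel = magnification(ecc) / magnification(0.0)  # 1 at the fovea
        return rel * img + (1.0 - rel) * gaussian_filter(img, sigma=4.0)

    view = foveate(np.random.rand(128, 128), fix_xy=(64, 64))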
(3) Although previous scanpath models have been able to efficiently predict saccades during Free-Viewing, it is well known that stimulus and task instructions can strongly affect eye movement patterns. In particular, task priming has been shown to be crucial to the deployment of eye movements, involving interactions between brain areas related to goal-directed behavior, working and long-term memory in combination with stimulus-driven eye movement neuronal correlates. In our latest study we proposed an extension of the Selective Tuning Attentive Reference Fixation Controller Model based on task demands (STAR-FCT), describing novel computational definitions of a Long-Term Memory, a Visual Task Executive and a Task Working Memory. With these modules we are able to use textual instructions to guide the model to attend to specific categories of objects and/or places in the scene. We designed our memory model by processing a visual hierarchy of low- and high-level features. The relationship between the executive task instructions and the memory representations is specified using a tree of semantic similarities between the learned features and the object category labels. Results reveal that, using this model, the resulting object localization maps and predicted saccades have a higher probability of falling inside the salient regions depending on the distinct task instructions, compared to saliency alone.
Address July 2019
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Xavier Otazu
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-948531-8-0 Medium
Area Expedition Conference
Notes NEUROBIT Approved no
Call Number Admin @ si @ Ber2019 Serial 3390
 
Author David Augusto Rojas; Joost Van de Weijer; Theo Gevers
Title Color Edge Saliency Boosting using Natural Image Statistics Type Conference Article
Year 2010 Publication 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science Abbreviated Journal
Volume Issue Pages 228–234
Keywords
Abstract State-of-the-art methods for image matching, content-based retrieval and recognition use local features. Most of them still exploit only the luminance information for detection. The color saliency boosting algorithm has provided an efficient method to exploit the saliency of color edges based on information theory. However, during the design of this algorithm, some issues were not addressed in depth: (1) the method ignored the underlying distribution of derivatives in natural images; (2) the dependence of the information content of color-boosted edges on their spatial derivatives had not been quantitatively established; (3) to evaluate the luminance and color contributions to edge saliency, a parameter gradually balancing both contributions is required.
We introduce a novel algorithm, based on the principles of independent component analysis, which models the first-order derivatives of natural color images by a generalized Gaussian distribution. Furthermore, using this probability model, we show that for images with a Laplacian distribution, which is a particular case of the generalized Gaussian distribution, the magnitudes of color-boosted edges reflect their corresponding information content. In order to evaluate the impact of color edge saliency in real-world applications, we introduce an extension of the Laplacian-of-Gaussian detector to color and evaluate its performance for image matching. Our experiments show that our approach provides more discriminative regions in comparison with the original detector.
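In spirit, the color saliency boosting under analysis amounts to whitening the distribution of channel derivatives so that rare chromatic derivatives carry as much weight as common luminance ones; the sketch below follows that principle and is not the paper's exact transform.

    import numpy as np

    def color_boosted_gradient(img):
        # img: float RGB array (H, W, 3). Estimate the covariance of the
        # per-channel spatial derivatives, whiten them, and return the
        # boosted gradient magnitude: after whitening, equally probable
        # derivatives have equal norm, so information content drives
        # edge saliency.
        gx = np.gradient(img, axis=1).reshape(-1, 3)
        gy = np.gradient(img, axis=0).reshape(-1, 3)
        d = np.vstack([gx, gy])                      # derivative samples
        cov = np.cov(d, rowvar=False) + 1e-8 * np.eye(3)
        vals, vecs = np.linalg.eigh(cov)
        W = vecs @ np.diag(vals ** -0.5) @ vecs.T    # whitening transform
        bx, by = gx @ W, gy @ W
        mag = np.sqrt((bx ** 2).sum(1) + (by ** 2).sum(1))
        return mag.reshape(img.shape[:2])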
Address Joensuu, Finland
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 9781617388897 Medium
Area Expedition Conference CGIV/MCS
Notes ISE Approved no
Call Number CAT @ cat @ RWG2010 Serial 1306
 
Author David Augusto Rojas; Fahad Shahbaz Khan; Joost Van de Weijer
Title The Impact of Color on Bag-of-Words based Object Recognition Type Conference Article
Year 2010 Publication 20th International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 1549–1553
Keywords
Abstract In recent years several works have aimed at exploiting color information in order to improve the bag-of-words based image representation. There are two stages in which color information can be applied in the bag-of-words framework. Firstly, feature detection can be improved by choosing highly informative color-based regions. Secondly, feature description, typically focused on shape, can be improved with a color description of the local patches. Although both approaches have been shown to improve results, their combined merits have not yet been analyzed. Therefore, in this paper we investigate the combined contribution of color to both the feature detection and extraction stages. Experiments performed on two challenging data sets, namely Flower and Pascal VOC 2009, clearly demonstrate that incorporating color in both feature detection and extraction significantly improves the overall performance.
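Schematically, color enters the pipeline at the two stages like this; all detector and descriptor callables are placeholders rather than the paper's implementation:

    import numpy as np

    def bow_histogram(img, detect, describe_shape, describe_color, codebook):
        # Stage 1: detect() returns keypoints from a color-aware detector
        # (e.g., color-boosted regions). Stage 2: each local patch is
        # described by concatenating a shape part and a color part before
        # vector quantization against the codebook.
        hist = np.zeros(len(codebook))
        for kp in detect(img):
            desc = np.concatenate([describe_shape(img, kp),
                                   describe_color(img, kp)])
            word = np.argmin(np.linalg.norm(codebook - desc, axis=1))
            hist[word] += 1
        return hist / max(hist.sum(), 1)  # L1-normalized representation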
Address Istanbul (Turkey)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1051-4651 ISBN 978-1-4244-7542-1 Medium
Area Expedition Conference ICPR
Notes Approved no
Call Number CAT @ cat @ RKW2010 Serial 1415
 
Author David Augusto Rojas
Title Colouring Local Feature Detection for Matching Type Report
Year 2009 Publication CVC Technical Report Abbreviated Journal
Volume 133 Issue Pages
Keywords
Abstract
Address
Corporate Author Computer Vision Center Thesis Master's thesis
Publisher Place of Publication Bellaterra, Barcelona Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ Roj2009 Serial 2392
 
Author David Aldavert; Ricardo Toledo; Arnau Ramisa; Ramon Lopez de Mantaras
Title Efficient Object Pixel-Level Categorization using Bag of Features: Advances in Visual Computing Type Conference Article
Year 2009 Publication 5th International Symposium on Visual Computing Abbreviated Journal
Volume 5875 Issue Pages 44–55
Keywords
Abstract In this paper we present a pixel-level object categorization method suitable to be applied under real-time constraints. Since pixels are categorized using a bag of features scheme, the major bottleneck of such an approach would be the feature pooling in local histograms of visual words. Therefore, we propose to bypass this time-consuming step and directly obtain the score from a linear Support Vector Machine classifier. This is achieved by creating an integral image of the components of the SVM, which can readily provide the classification score for any image sub-window with only 10 additions and 2 products, regardless of its size. Besides, we evaluate the performance of two efficient feature quantization methods: the Hierarchical K-Means and the Extremely Randomized Forest. All experiments have been done on the Graz-02 database, showing results comparable to, or even better than, related work at a lower computational cost.
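The integral-image trick can be sketched as follows; this is a simplification that assumes integer sample positions and a linear SVM with one weight per visual word:

    import numpy as np

    def build_score_integral(positions, words, svm_w, shape):
        # positions: (N, 2) integer (y, x) locations of densely sampled
        # features; words: (N,) visual-word index of each sample; svm_w:
        # per-word linear SVM weight. Splat the weights into a map and
        # integrate it once, so the SVM score of ANY sub-window becomes
        # a constant-time box sum, independent of window size.
        m = np.zeros(shape)
        np.add.at(m, (positions[:, 0], positions[:, 1]), svm_w[words])
        ii = np.zeros((shape[0] + 1, shape[1] + 1))
        ii[1:, 1:] = m.cumsum(0).cumsum(1)
        return ii

    def window_score(ii, y0, x0, y1, x1, bias):
        # Linear SVM score of window [y0:y1, x0:x1] from 4 lookups.
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0] + bias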
Address Las Vegas, USA
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-10330-8 Medium
Area Expedition Conference ISVC
Notes ADAS Approved no
Call Number Admin @ si @ ATR2009a Serial 1246
 
Author David Aldavert; Ricardo Toledo; Arnau Ramisa; Ramon Lopez de Mantaras
Title Visual Registration Method For A Low Cost Robot: Computer Vision Systems Type Conference Article
Year 2009 Publication 7th International Conference on Computer Vision Systems Abbreviated Journal
Volume 5815 Issue Pages 204–214
Keywords
Abstract An autonomous mobile robot must face the correspondence, or data association, problem in order to carry out tasks like place recognition or unknown environment mapping. In order to put two maps into correspondence, most methods estimate the transformation relating the maps from matches established between low-level features extracted from sensor data. However, finding explicit matches between features is a challenging and computationally expensive task. In this paper, we propose a new method to align obstacle maps without searching for explicit matches between features. The maps are obtained from a stereo pair. Then, we use a vocabulary tree approach to identify putative corresponding maps, followed by the Newton minimization algorithm to find the transformation that relates both maps. The proposed method is evaluated in a typical office environment, showing good performance.
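A match-free alignment can be sketched with a distance transform plus a least-squares solver standing in for the Newton scheme described; the grid, points and rigid (tx, ty, theta) parameterization are illustrative assumptions:

    import numpy as np
    from scipy.ndimage import distance_transform_edt, map_coordinates
    from scipy.optimize import least_squares

    def align_maps(ref_grid, pts, x0=(0.0, 0.0, 0.0)):
        # ref_grid: binary obstacle grid of the reference map; pts:
        # (N, 2) obstacle points (x, y) of the incoming map. Precompute
        # the distance-to-obstacle field once, then minimize the sampled
        # distances of the rigidly transformed points: no explicit
        # feature matches are ever established.
        dist = distance_transform_edt(ref_grid == 0)  # 0 at obstacles

        def residuals(p):
            tx, ty, th = p
            c, s = np.cos(th), np.sin(th)
            x = c * pts[:, 0] - s * pts[:, 1] + tx
            y = s * pts[:, 0] + c * pts[:, 1] + ty
            return map_coordinates(dist, [y, x], order=1, mode='nearest')

        return least_squares(residuals, x0).x  # (tx, ty, theta)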
Address Belgica
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-04666-7 Medium
Area Expedition Conference ICVS
Notes ADAS Approved no
Call Number Admin @ si @ ATR2009b Serial 1247
 
Author David Aldavert; Ricardo Toledo
Title Stereo Vision Local Map Alignment for Robot Environment Mapping Type Book Chapter
Year 2008 Publication Robot Vision Second International Workshop, RobVis Abbreviated Journal
Volume 4931 Issue Pages 111–124
Keywords
Abstract
Address Auckland (New Zealand)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number Admin @ si @ AlT2008 Serial 1100
 
Author David Aldavert; Marçal Rusiñol; Ricardo Toledo; Josep Llados
Title Integrating Visual and Textual Cues for Query-by-String Word Spotting Type Conference Article
Year 2013 Publication 12th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages 511 - 515
Keywords
Abstract In this paper, we present a word spotting framework that follows the query-by-string paradigm, where word images are represented by both textual and visual representations. The textual representation is formulated in terms of character n-grams, while the visual one is based on the bag-of-visual-words scheme. These two representations are merged together and projected to a sub-vector space. Given a textual query, this transform makes it possible to retrieve word instances that were represented only by the visual modality. Moreover, this statistical representation can be used together with state-of-the-art indexation structures in order to deal with large-scale scenarios. The proposed method is evaluated using a collection of historical documents, outperforming state-of-the-art performances.
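A toy sketch of the cross-modal idea follows, with ridge regression standing in for the paper's subspace projection; all sizes and data are toy values:

    import numpy as np

    # T: character n-gram histograms of transcribed training words;
    # V: bag-of-visual-words histograms of the same word images.
    rng = np.random.default_rng(0)
    T = rng.random((500, 200))
    V = rng.random((500, 1000))

    lam = 1e-2  # ridge regularizer
    W = np.linalg.solve(T.T @ T + lam * np.eye(T.shape[1]), T.T @ V)

    def spot(query_ngrams, database_bovw, k=10):
        # Project the text-only query into the visual space and rank the
        # (untranscribed) word images by cosine similarity.
        q = query_ngrams @ W
        q = q / (np.linalg.norm(q) + 1e-8)
        scores = database_bovw @ q / (np.linalg.norm(database_bovw,
                                                     axis=1) + 1e-8)
        return np.argsort(-scores)[:k]  # indices of the top-k word images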
Address Washington; USA; August 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1520-5363 ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; ADAS; 600.045; 600.055; 600.061 Approved no
Call Number Admin @ si @ ART2013 Serial 2224
 
Author David Aldavert; Marçal Rusiñol; Ricardo Toledo; Josep Llados
Title A Study of Bag-of-Visual-Words Representations for Handwritten Keyword Spotting Type Journal Article
Year 2015 Publication International Journal on Document Analysis and Recognition Abbreviated Journal IJDAR
Volume 18 Issue 3 Pages 223-234
Keywords Bag-of-Visual-Words; Keyword spotting; Handwritten documents; Performance evaluation
Abstract The Bag-of-Visual-Words (BoVW) framework has gained popularity among the document image analysis community, specifically as a representation of handwritten words for recognition or spotting purposes. Although in the computer vision field the BoVW method has been greatly improved, most of the approaches in the document image analysis domain still rely on the basic implementation of the BoVW method, disregarding such latest refinements. In this paper, we present a review of those improvements and their application to the keyword spotting task. We thoroughly evaluate their impact against a baseline system on the well-known George Washington dataset and compare the obtained results against nine state-of-the-art keyword spotting methods. In addition, we also compare both the baseline and improved systems with the methods presented at the Handwritten Keyword Spotting Competition 2014.
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1433-2833 ISBN Medium
Area Expedition Conference
Notes DAG; ADAS; 600.055; 600.061; 601.223; 600.077; 600.097 Approved no
Call Number Admin @ si @ ART2015 Serial 2679
 
Author David Aldavert; Marçal Rusiñol; Ricardo Toledo
Title Automatic Static/Variable Content Separation in Administrative Document Images Type Conference Article
Year 2017 Publication 14th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages
Keywords
Abstract In this paper we present an automatic method for separating static and variable content in administrative document images. An alignment approach is able to unsupervisedly build probabilistic templates from a set of examples of the same document kind. Such templates define the likelihood of every pixel being either static or variable content. In the extraction step, the same alignment technique is used to match an incoming image with the template and to locate the positions where variable fields appear. We validate our approach on the public NIST Structured Tax Forms Dataset.
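The template idea can be sketched as follows, assuming the example images are already aligned and binarized (the paper's alignment step is omitted and the threshold is illustrative):

    import numpy as np

    def probabilistic_template(binarized_forms):
        # binarized_forms: list of aligned images of the same form type,
        # with 1 = ink. The per-pixel ink frequency estimates the
        # probability of being static content: ink present in nearly
        # every example is the printed template, ink present only
        # occasionally is variable (filled-in) content.
        p_ink = np.stack(binarized_forms).astype(float).mean(axis=0)
        static = p_ink > 0.8  # printed rulings, labels, boxes
        return p_ink, static

    def variable_content(aligned_form, static):
        # Ink in the incoming image that the template does not explain
        # is returned as the variable fields.
        return (aligned_form == 1) & ~static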
Address Kyoto; Japan; November 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 600.084; 600.121 Approved no
Call Number Admin @ si @ ART2017 Serial 3001
 
Author David Aldavert; Marçal Rusiñol
Title Manuscript text line detection and segmentation using second-order derivatives analysis Type Conference Article
Year 2018 Publication 13th IAPR International Workshop on Document Analysis Systems Abbreviated Journal
Volume Issue Pages 293 - 298
Keywords text line detection; text line segmentation; text region detection; second-order derivatives
Abstract In this paper, we explore the use of second-order derivatives to detect text lines in handwritten document images. Taking advantage of the fact that the second derivative gives a minimum response when a dark linear element over a bright background has the same orientation as the filter, we use this operator to create a map with the local orientation and strength of putative text lines in the document. Then, we detect line segments by selecting and merging the filter responses that have a similar orientation and scale. Finally, text lines are found by merging the segments that lie within the same text region. The proposed segmentation algorithm is learning-free, while showing performance similar to state-of-the-art methods on publicly available datasets.
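The second-derivative analysis can be sketched with steerable Gaussian derivatives; the scale and orientation sampling below are illustrative, not the paper's settings:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def line_orientation_map(img, sigma=3.0, n_orient=8):
        # The second directional derivative is steerable from the three
        # Gaussian second derivatives: I_tt = c^2 Ixx + 2cs Ixy + s^2 Iyy.
        # A dark (valley-like) text line yields a large positive response
        # ACROSS the line, so the orientation maximizing the response is
        # perpendicular to the line itself.
        ixx = gaussian_filter(img, sigma, order=(0, 2))
        iyy = gaussian_filter(img, sigma, order=(2, 0))
        ixy = gaussian_filter(img, sigma, order=(1, 1))
        best = np.full(img.shape, -np.inf)
        best_th = np.zeros(img.shape)
        for th in np.linspace(0, np.pi, n_orient, endpoint=False):
            c, s = np.cos(th), np.sin(th)
            r = c * c * ixx + 2 * c * s * ixy + s * s * iyy
            upd = r > best
            best[upd], best_th[upd] = r[upd], th
        # strength map and the line's own orientation (th + pi/2)
        return best, (best_th + np.pi / 2) % np.pi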
Address Viena; Austria; April 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 600.084; 600.129; 302.065; 600.121 Approved no
Call Number Admin @ si @ AlR2018a Serial 3104
 
Author David Aldavert; Marçal Rusiñol
Title Synthetically generated semantic codebook for Bag-of-Visual-Words based word spotting Type Conference Article
Year 2018 Publication 13th IAPR International Workshop on Document Analysis Systems Abbreviated Journal
Volume Issue Pages 223 - 228
Keywords Word Spotting; Bag of Visual Words; Synthetic Codebook; Semantic Information
Abstract Word-spotting methods based on the Bag-of-Visual-Words framework have demonstrated good retrieval performance even when used in a completely unsupervised manner. Although unsupervised approaches are suitable for large document collections due to the cost of acquiring labeled data, these methods also present some drawbacks. For instance, training a suitable "codebook" for a certain dataset has a high computational cost. Therefore, in this paper we present a database-agnostic codebook which is trained from synthetic data. The aim of the proposed approach is to generate a codebook where the only information required is the type of script used in the document. The use of synthetic data also makes it easy to incorporate semantic information into the codebook generation. Thus, the proposed method is able to determine which set of codewords has a semantic representation of the descriptor feature space. Experimental results show that the resulting codebook attains state-of-the-art performance while having a more compact representation.
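A toy sketch of codebook training from synthetic renderings follows; PIL's default font and raw gray-level patches stand in for script-appropriate fonts and the paper's descriptors:

    import numpy as np
    from PIL import Image, ImageDraw, ImageFont
    from sklearn.cluster import KMeans

    def render_word(text, size=(200, 48)):
        # Render a synthetic word image; the only knowledge used is the
        # script (here, Latin text drawn with PIL's default font).
        img = Image.new('L', size, 255)
        ImageDraw.Draw(img).text((5, 5), text,
                                 font=ImageFont.load_default(), fill=0)
        return np.asarray(img, dtype=float) / 255.0

    def synthetic_codebook(vocab, n_words=256, patch=8):
        # Sample patches densely from rendered words and cluster them.
        # Because each patch's source text is known, codewords can also
        # be tagged with the characters they came from, which is the
        # semantic side of the approach.
        patches = []
        for word in vocab:
            im = render_word(word)
            for y in range(0, im.shape[0] - patch, patch // 2):
                for x in range(0, im.shape[1] - patch, patch // 2):
                    patches.append(im[y:y + patch, x:x + patch].ravel())
        km = KMeans(n_clusters=n_words, n_init=3).fit(np.array(patches))
        return km.cluster_centers_

    codebook = synthetic_codebook(['saliency', 'cortex', 'word', 'spotting'])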
Address Viena; Austria; April 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 600.084; 600.129; 600.121 Approved no
Call Number Admin @ si @ AlR2018b Serial 3105