|
Records |
|
Author |
Arash Akbarinia; C. Alejandro Parraga |
|
|
Title |
Colour Constancy Beyond the Classical Receptive Field |
Type |
Journal Article |
|
Year |
2018 |
Publication |
IEEE Transactions on Pattern Analysis and Machine Intelligence |
Abbreviated Journal |
TPAMI |
|
|
Volume |
40 |
Issue |
9 |
Pages |
2081-2094 |
|
|
Abstract |
The problem of removing illuminant variations to preserve the colours of objects (colour constancy) has already been solved by the human brain using mechanisms that rely largely on centre-surround computations of local contrast. In this paper we adapt some of these biological solutions, described by long-known physiological findings, into a simple, fully automatic, functional model (termed Adaptive Surround Modulation or ASM). In ASM, the size of a visual neuron's receptive field (RF) and its relationship with its surround vary according to the local contrast within the stimulus, which in turn determines the nature of the centre-surround normalisation of cortical neurons higher up in the processing chain. We modelled colour constancy by means of two overlapping asymmetric Gaussian kernels whose sizes are adapted based on the contrast of the surround pixels, resembling the change of RF size. We simulated the contrast-dependent surround modulation by weighting the contribution of each Gaussian according to the centre-surround contrast. Finally, we obtained an estimate of the illuminant from the set of the most activated RFs' outputs. Our results on three single-illuminant and one multi-illuminant benchmark datasets show that ASM is highly competitive with the state of the art and even outperforms learning-based algorithms in one case. Moreover, the robustness of our model is more tangible if we consider that our results were obtained using the same parameters for all datasets, that is, mimicking how the human visual system operates. These results might provide insight into how dynamical adaptation mechanisms contribute to making objects' colours appear constant to us. |
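The centre-surround mechanism described in the abstract can be illustrated with a minimal Python sketch. This is not the authors' implementation: the kernel sizes, the local-contrast proxy, and the pooling fraction are all illustrative assumptions. Local contrast blends a narrow and a wide Gaussian surround per pixel, and the illuminant is pooled from the most activated responses per channel.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1D Gaussian kernel truncated at 3 sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(channel, sigma):
    """Separable Gaussian blur using plain numpy convolutions."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, channel)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def estimate_illuminant(img, sigma_small=1.0, sigma_large=5.0, top=0.05):
    """Toy ASM-style estimate: blend narrow/wide surrounds by local
    contrast, then pool the most activated responses per channel."""
    img = img.astype(float)
    grey = img.mean(axis=2)
    contrast = np.abs(grey - blur(grey, sigma_large))   # local-contrast proxy
    w = contrast / (contrast.max() + 1e-9)              # per-pixel blend weight
    est = np.zeros(3)
    for c in range(3):
        narrow = blur(img[..., c], sigma_small)
        wide = blur(img[..., c], sigma_large)
        resp = w * narrow + (1 - w) * wide              # contrast-dependent RF
        thresh = np.quantile(resp, 1 - top)
        est[c] = resp[resp >= thresh].mean()            # "most activated" pool
    return est / (np.linalg.norm(est) + 1e-9)
```

On a scene uniformly lit by one illuminant, the pooled estimate points in the direction of that illuminant's RGB vector, which is what a colour-constancy benchmark measures (angular error between estimate and ground truth).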
|
|
Notes |
NEUROBIT; 600.068; 600.072 |
Approved |
no |
|
|
Call Number |
Admin @ si @ AkP2018a |
Serial |
2990 |
|
|
|
|
|
Author |
Xim Cerda-Company; Xavier Otazu |
|
|
Title |
Color induction in equiluminant flashed stimuli |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Journal of the Optical Society of America A |
Abbreviated Journal |
JOSA A |
|
|
Volume |
36 |
Issue |
1 |
Pages |
22-31 |
|
|
Abstract |
Color induction is the influence of the surrounding color (inducer) on the perceived color of a central region. There are two different types of color induction: color contrast (the color of the central region shifts away from that of the inducer) and color assimilation (the color shifts towards the color of the inducer). Several studies on these effects have used uniform and striped surrounds, reporting color contrast and color assimilation, respectively. Other authors [J. Vis. 12(1), 22 (2012)] have studied color induction using flashed uniform surrounds, reporting that the contrast is higher for shorter flash durations. Extending their study, we present new psychophysical results using both flashed and static (i.e., non-flashed) equiluminant stimuli for both striped and uniform surrounds. As in that study, we observed color contrast for uniform surround stimuli, but the maximum contrast occurred not for the shortest (10 ms) flashed stimuli but for 40 ms ones. We observed this maximum contrast only for red, green, and lime inducers, while for a purple inducer we obtained an asymptotic profile along the flash duration. For striped stimuli, we observed color assimilation only for the static (infinite flash duration) red–green surround inducers (red first inducer, green second inducer). For the other inducer configurations, we observed color contrast or no induction. Since other studies showed that non-equiluminant striped static stimuli induce color assimilation, our results also suggest that luminance differences could be a key factor in inducing it. |
|
|
Notes |
NEUROBIT; 600.120; 600.128 |
Approved |
no |
|
|
Call Number |
Admin @ si @ CeO2019 |
Serial |
3226 |
|
|
|
|
|
Author |
Xim Cerda-Company; C. Alejandro Parraga; Xavier Otazu |
|
|
Title |
Which tone-mapping operator is the best? A comparative study of perceptual quality |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Journal of the Optical Society of America A |
Abbreviated Journal |
JOSA A |
|
|
Volume |
35 |
Issue |
4 |
Pages |
626-638 |
|
|
Abstract |
Tone-mapping operators (TMOs) are designed to generate perceptually similar low-dynamic-range images from high-dynamic-range ones. We studied the performance of fifteen TMOs in two psychophysical experiments where observers compared the digitally generated tone-mapped images to their corresponding physical scenes. All experiments were performed in a controlled environment and the setups were designed to emphasize different image properties: in the first experiment we evaluated the local relationships among intensity levels, and in the second one we evaluated global visual appearance among physical scenes and tone-mapped images, which were presented side by side. We ranked the TMOs according to how well they reproduced the results obtained in the physical scene. Our results show that ranking position clearly depends on the adopted evaluation criteria, which implies that, in general, these tone-mapping algorithms consider either local or global image attributes but rarely both. Regarding the question of which TMO is the best, KimKautz [1] and Krawczyk [2] obtained the best results across the different experiments. We conclude that more thorough and standardized evaluation criteria are needed to study all the characteristics of TMOs, as there is ample room for improvement in future developments. |
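To make concrete what a TMO does, here is a sketch of a classic global operator in the spirit of Reinhard et al.'s photographic tone reproduction. It is an illustration of the general technique, not one of the fifteen implementations compared in the paper; the `key` parameter value is an assumption.

```python
import numpy as np

def reinhard_global(hdr, key=0.18, eps=1e-6):
    """Global tone mapping: scale by the log-average luminance, then
    compress luminance with L / (1 + L), preserving colour ratios."""
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))    # scene "key" estimate
    scaled = key * lum / log_avg
    ldr_lum = scaled / (1.0 + scaled)               # maps [0, inf) into [0, 1)
    ratio = ldr_lum / (lum + eps)                   # per-pixel luminance gain
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)
```

Because the curve is applied identically to every pixel, an operator like this captures global appearance but not local intensity relationships, which is exactly the local-versus-global trade-off the paper's two experiments were designed to expose.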
|
|
Notes |
NEUROBIT; 600.120; 600.128 |
Approved |
no |
|
|
Call Number |
Admin @ si @ CPO2018 |
Serial |
3088 |
|
|
|
|
|
Author |
David Berga; Xavier Otazu |
|
|
Title |
A neurodynamic model of saliency prediction in V1 |
Type |
Journal Article |
|
Year |
2022 |
Publication |
Neural Computation |
Abbreviated Journal |
NEURALCOMPUT |
|
|
Volume |
34 |
Issue |
2 |
Pages |
378-414 |
|
|
Abstract |
Lateral connections in the primary visual cortex (V1) have long been hypothesized to be responsible for several visual processing mechanisms such as brightness induction, chromatic induction, visual discomfort, and bottom-up visual attention (also named saliency). Many computational models have been developed to independently predict these and other visual processes, but no computational model has been able to reproduce all of them simultaneously. In this work, we show that a biologically plausible computational model of lateral interactions of V1 is able to simultaneously predict saliency and all the aforementioned visual processes. Our model's architecture (NSWAM) is based on Penacchio's neurodynamic model of lateral connections of V1. It is defined as a network of firing rate neurons, sensitive to visual features such as brightness, color, orientation, and scale. We tested NSWAM saliency predictions using images from several eye tracking data sets. We show that the accuracy of predictions obtained by our architecture, using shuffled metrics, is similar to other state-of-the-art computational methods, particularly with synthetic images (CAT2000-Pattern and SID4VAM) that mainly contain low-level features. Moreover, we outperform other biologically inspired saliency models that are specifically designed to exclusively reproduce saliency. We show that our biologically plausible model of lateral connections can simultaneously explain different visual processes present in V1 (without applying any type of training or optimization and keeping the same parameterization for all the visual processes). This can be useful for the definition of a unified architecture of the primary visual cortex. |
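The "network of firing rate neurons" at the core of such models follows a standard dynamical form. The sketch below is a textbook firing-rate update, not NSWAM itself; the nonlinearity, time constant, and connectivity are placeholders.

```python
import numpy as np

def firing_rate_dynamics(stimulus, W, steps=200, dt=0.1, tau=1.0):
    """Generic firing-rate network: tau * dr/dt = -r + f(W @ r + s),
    integrated with forward Euler. W holds the lateral (excitatory and
    inhibitory) connection weights between neurons."""
    f = lambda x: np.maximum(0.0, np.tanh(x))   # saturating rectification
    r = np.zeros_like(stimulus, dtype=float)
    for _ in range(steps):
        r = r + (dt / tau) * (-r + f(W @ r + stimulus))
    return r
```

With centre-surround structure in W (local excitation, broader inhibition), a neuron driven by a locally distinct feature settles at a higher steady-state rate than neurons in homogeneous regions; models of this family read that enhanced response out as saliency.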
|
|
Notes |
NEUROBIT; 600.128; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ BeO2022 |
Serial |
3696 |
|
|
|
|
|
Author |
Lu Yu; Lichao Zhang; Joost Van de Weijer; Fahad Shahbaz Khan; Yongmei Cheng; C. Alejandro Parraga |
|
|
Title |
Beyond Eleven Color Names for Image Understanding |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Machine Vision and Applications |
Abbreviated Journal |
MVAP |
|
|
Volume |
29 |
Issue |
2 |
Pages |
361-373 |
|
|
Keywords |
Color name; Discriminative descriptors; Image classification; Re-identification; Tracking |
|
|
Abstract |
Color description is one of the fundamental problems of image understanding. One of the popular ways to represent colors is by means of color names. Most existing work on color names focuses on only the eleven basic color terms of the English language. This could limit the discriminative power of these representations, and representations based on more color names are expected to perform better. However, there exists no clear strategy for choosing additional color names. We collect a dataset of 28 additional color names. To ensure that the resulting color representation has high discriminative power, we propose a method to order the additional color names according to their complementary nature with the basic color names. This allows us to compute color name representations of arbitrary length with high discriminative power. In the experiments we show that these new color name descriptors outperform the existing color name descriptor on the tasks of visual tracking, person re-identification, and image classification. |
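The idea of a colour-name descriptor can be sketched as follows. The prototypes here are illustrative RGB anchors (hard nearest-neighbour assignment), not the learned colour-name probability maps this line of work actually builds on.

```python
import numpy as np

# Illustrative RGB prototypes for a few colour names (assumed values,
# not the learned colour-name model from the paper).
PROTOTYPES = {
    "black": (0, 0, 0), "white": (255, 255, 255), "red": (255, 0, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "yellow": (255, 255, 0),
}

def color_name_descriptor(image):
    """Assign each pixel to its nearest prototype and return the
    normalised colour-name histogram as an image descriptor."""
    names = list(PROTOTYPES)
    protos = np.array([PROTOTYPES[n] for n in names], dtype=float)
    pixels = image.reshape(-1, 3).astype(float)
    d = np.linalg.norm(pixels[:, None, :] - protos[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                    # colour name per pixel
    hist = np.bincount(nearest, minlength=len(names)).astype(float)
    return names, hist / hist.sum()
```

Extending the prototype list with additional colour names lengthens the descriptor; the paper's contribution is choosing and ordering those additions so the extra dimensions actually add discriminative power rather than redundancy.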
|
|
Notes |
LAMP; NEUROBIT; 600.068; 600.109; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ YYW2018 |
Serial |
3087 |
|