Author Sergio Escalera; Xavier Baro; Jordi Vitria; Petia Radeva
  Title Text Detection in Urban Scenes (video sample) Type Conference Article
  Year 2009 Publication 12th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal  
  Volume 202 Issue Pages 35–44  
  Keywords  
  Abstract Text detection in urban scenes is a hard task due to the high variability of text appearance: different text fonts, changes in the point of view, or partial occlusion are just a few of the problems. Text detection is especially useful for georeferencing businesses, navigation, tourist assistance, or helping visually impaired people. In this paper, we propose a general methodology to deal with the problem of text detection in outdoor scenes. The method is based on learning spatial information from gradient-based features and Census Transform images using a cascade of classifiers. The method is applied in the context of Mobile Mapping systems, where a mobile vehicle captures urban image sequences. Moreover, a cover data set is presented and tested with the new methodology. The results show high accuracy when detecting multi-linear text regions with high variability of appearance, while preserving a low false alarm rate compared to classical approaches. (An illustrative Census Transform sketch follows this record.)
  Address Cardona (Spain)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-60750-061-2 Medium  
  Area Expedition Conference CCIA  
  Notes OR;MILAB;HuPBA;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ EBV2009 Serial 1181  
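The record above names the Census Transform as one of the feature encodings behind the cascade detector. For reference only, the following is a minimal numpy sketch of the standard 3x3 Census Transform, not the paper's detector; the window size, border handling and the toy input are assumptions.

```python
import numpy as np

def census_transform(gray, radius=1):
    """3x3 Census Transform: each pixel becomes a bit code recording
    which neighbours are darker than the centre pixel."""
    h, w = gray.shape
    padded = np.pad(gray, radius, mode="edge")
    census = np.zeros((h, w), dtype=np.uint16)
    bit = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue  # the centre pixel is not compared with itself
            neighbour = padded[radius + dy: radius + dy + h,
                               radius + dx: radius + dx + w]
            census |= ((neighbour < gray).astype(np.uint16) << bit)
            bit += 1
    return census

# Usage: a random array stands in for a grayscale urban-scene crop.
img = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
codes = census_transform(img)   # one 8-bit code per pixel (stored as uint16)
```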
 

 
Author Xavier Baro; Sergio Escalera; Petia Radeva; Jordi Vitria
  Title Generic Object Recognition in Urban Image Databases Type Conference Article
  Year 2009 Publication 12th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal  
  Volume 202 Issue Pages 27-34  
  Keywords  
  Abstract In this paper we propose the construction of a visual content layer which describes the visual appearance of geographic locations in a city. We captured, by means of a Mobile Mapping system, a huge set of georeferenced images (>500K) which cover the whole city of Barcelona. For each image, hundreds of region descriptions are computed off-line and described as a hash code. All this information is extracted without an object of reference, which makes it possible to search for any type of object by its visual appearance. A new Visual Content layer is built over Google Maps, allowing the object recognition information to be organized and fused with other content, such as satellite images, street maps, and business locations. (A generic hashing and retrieval sketch follows this record.)
  Address Cardona (Spain)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-60750-061-2 Medium  
  Area Expedition Conference CCIA  
  Notes OR;MILAB;HuPBA;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ VER2009 Serial 1183  
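The abstract above describes indexing hundreds of region descriptors per image as hash codes so that objects can later be retrieved by appearance alone. The paper's actual coding scheme is not reproduced here; the sketch below only illustrates that general idea with random-projection binary hashes and a Hamming-distance lookup, and every dimension and threshold is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_hash(descriptors, projections):
    """Map real-valued region descriptors to compact binary codes
    by thresholding random projections at zero."""
    return (descriptors @ projections > 0).astype(np.uint8)

def hamming_search(query_code, database_codes, max_distance=8):
    """Return indices of database codes within a Hamming radius of the query."""
    distances = np.count_nonzero(database_codes != query_code, axis=1)
    return np.flatnonzero(distances <= max_distance)

# Illustrative numbers: 128-D descriptors hashed to 32-bit codes.
projections = rng.standard_normal((128, 32))
database = binary_hash(rng.standard_normal((100_000, 128)), projections)
query = binary_hash(rng.standard_normal((1, 128)), projections)[0]
matches = hamming_search(query, database)
```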
 

 
Author Francesco Ciompi; Oriol Pujol; O. Rodriguez-Leor; Angel Serrano; J. Mauri; Petia Radeva
  Title On in-vitro and in-vivo IVUS data fusion Type Conference Article
  Year 2009 Publication 12th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal  
  Volume 202 Issue Pages 147-156  
  Keywords  
  Abstract The design and the validation of an automatic plaque characterization technique based on Intravascular Ultrasound (IVUS) usually requires a data ground-truth. The histological analysis of post-mortem coronary arteries is commonly assumed to be the state-of-the-art process for the extraction of a reliable data-set of atherosclerotic plaques. Unfortunately, the amount of data provided by this technique is usually small, due to the difficulties in collecting post-mortem cases and to tissue spoiling during histological analysis. In this paper we tackle the process of fusing in-vivo and in-vitro IVUS data, starting with an analysis of recently proposed approaches for the creation of an enhanced IVUS data-set; furthermore, we propose a new approach, named pLDS, based on semi-supervised learning with a data selection criterion. The enhanced data-set obtained by each of the analyzed approaches is used to train a classifier for tissue characterization purposes. Finally, the discriminative power of each classifier is quantitatively assessed and compared by classifying a data-set of validated in-vitro IVUS data. (A generic self-training sketch follows this record.)
  Address Cardona (Spain)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-60750-061-2 Medium  
  Area Expedition Conference CCIA  
  Notes MILAB;HuPBA Approved no  
  Call Number BCNPCL @ bcnpcl @ CPR2009d Serial 1204  
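pLDS itself is not detailed in the abstract, so it is not reproduced here. The sketch below shows only the generic self-training pattern it builds on: train on the labelled in-vitro data, pseudo-label the unlabelled in-vivo data, and keep the confidently predicted samples. The classifier, the confidence threshold and the toy data are assumptions; in the paper the data selection criterion is the contribution, while here it is reduced to a plain probability threshold.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_training(X_vitro, y_vitro, X_vivo, confidence=0.9, rounds=3):
    """Grow the labelled set with confidently pseudo-labelled in-vivo samples."""
    X_train, y_train = X_vitro.copy(), y_vitro.copy()
    for _ in range(rounds):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)
        if len(X_vivo) == 0:
            break
        proba = clf.predict_proba(X_vivo)
        keep = proba.max(axis=1) >= confidence        # data selection step
        if not keep.any():
            break
        X_train = np.vstack([X_train, X_vivo[keep]])
        y_train = np.concatenate([y_train, clf.classes_[proba[keep].argmax(axis=1)]])
        X_vivo = X_vivo[~keep]
    return clf

# Toy usage with random features standing in for IVUS texture descriptors.
rng = np.random.default_rng(0)
model = self_training(rng.normal(size=(200, 16)), rng.integers(0, 3, 200),
                      rng.normal(size=(500, 16)))
```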
 

 
Author Carme Julia; Angel Sappa; Felipe Lumbreras; Joan Serrat; Antonio Lopez
  Title Predicting Missing Ratings in Recommender Systems: Adapted Factorization Approach Type Journal Article
  Year 2009 Publication International Journal of Electronic Commerce Abbreviated Journal  
  Volume 14 Issue 1 Pages 89-108  
  Keywords  
  Abstract The paper presents a factorization-based approach to make predictions in recommender systems. These systems are widely used in electronic commerce to help customers find products according to their preferences. Taking into account the customer's ratings of some of the products available in the system, the recommender system tries to predict the ratings the customer would give to the other products. The proposed factorization-based approach uses all the information provided to compute the predicted ratings, in the same way as approaches based on Singular Value Decomposition (SVD). The main advantage of this technique over SVD-based approaches is that it can deal with missing data; it also has a smaller computational cost. Experimental results with public data sets show that the proposed adapted factorization approach gives better predicted ratings than a widely used SVD-based approach. (A minimal factorization sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1086-4415 ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ JSL2009b Serial 1237  
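The adapted factorization is only summarized above, so the sketch below shows just the underlying idea it shares with that family of methods: factorize the ratings matrix while fitting only the observed entries, which is what lets such approaches handle missing data that plain SVD cannot. The rank, learning rate, regularization and toy ratings are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def factorize(R, mask, rank=5, lr=0.01, reg=0.05, epochs=200, seed=0):
    """Fit R ~= U @ V.T using only the observed entries indicated by mask."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for u, i in zip(rows, cols):
            err = R[u, i] - U[u] @ V[i]        # error on an observed rating only
            Uu = U[u].copy()
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * Uu - reg * V[i])
    return U, V

# Toy example: 4 customers x 5 products, 0 marks a missing rating.
R = np.array([[5, 3, 0, 1, 0],
              [4, 0, 0, 1, 1],
              [1, 1, 0, 5, 0],
              [0, 1, 5, 4, 0]], dtype=float)
U, V = factorize(R, mask=R > 0)
predicted = U @ V.T          # predicted ratings, including the missing cells
```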
 

 
Author Xinhang Song; Shuqiang Jiang; Luis Herranz
  Title Combining Models from Multiple Sources for RGB-D Scene Recognition Type Conference Article
  Year 2017 Publication 26th International Joint Conference on Artificial Intelligence Abbreviated Journal  
  Volume Issue Pages 4523-4529  
  Keywords Robotics and Vision; Vision and Perception  
  Abstract Depth can complement RGB with useful cues about object volumes and scene layout. However, RGB-D image datasets are still too small for directly training deep convolutional neural networks (CNNs), in contrast to the massive monomodal RGB datasets. Previous works in RGB-D recognition typically combine two separate networks for RGB and depth data, pretrained on a large RGB dataset and then fine-tuned to the respective target RGB and depth datasets. These approaches have several limitations: 1) they only use low-level filters learned from RGB data, and thus cannot properly exploit depth-specific patterns, and 2) RGB and depth features are only combined at high levels but rarely at lower levels. In this paper, we propose a framework that leverages both knowledge acquired from large RGB datasets and depth-specific cues learned from the limited depth data, obtaining more effective multi-source and multi-modal representations. We propose a multi-modal combination method that selects discriminative combinations of layers from the different source models and target modalities, capturing both high-level properties of the task and intrinsic low-level properties of both modalities. (An illustrative feature-combination sketch follows this record.)
  Address Melbourne; Australia; August 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IJCAI  
  Notes LAMP; 600.120 Approved no  
  Call Number Admin @ si @ SJH2017b Serial 2966  
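The layer-selection procedure is not spelled out in the abstract, so the PyTorch sketch below only illustrates the structure it relies on: two modality-specific backbones, pooled features tapped from both a lower and a higher layer of each, and a shared scene classifier over their concatenation. The tiny architectures, the tapped layers and the class count are placeholders, not the authors' models.

```python
import torch
import torch.nn as nn

class SmallBackbone(nn.Module):
    """Two conv stages so that a low-level and a high-level feature can be tapped."""
    def __init__(self, in_channels):
        super().__init__()
        self.low = nn.Sequential(nn.Conv2d(in_channels, 32, 3, padding=1),
                                 nn.ReLU(), nn.MaxPool2d(2))
        self.high = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1),
                                  nn.ReLU(), nn.MaxPool2d(2))

    def forward(self, x):
        low = self.low(x)
        high = self.high(low)
        pool = nn.functional.adaptive_avg_pool2d
        return pool(low, 1).flatten(1), pool(high, 1).flatten(1)

class RGBDSceneClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.rgb_net = SmallBackbone(3)    # stands in for an RGB-pretrained model
        self.depth_net = SmallBackbone(1)  # stands in for a depth-specific model
        self.classifier = nn.Linear(2 * (32 + 64), num_classes)

    def forward(self, rgb, depth):
        feats = [*self.rgb_net(rgb), *self.depth_net(depth)]  # low+high, both modalities
        return self.classifier(torch.cat(feats, dim=1))

model = RGBDSceneClassifier()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
```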
 

 
Author Javier Vazquez; C. Alejandro Parraga; Maria Vanrell; Ramon Baldrich
  Title Color Constancy Algorithms: Psychophysical Evaluation on a New Dataset Type Journal Article
  Year 2009 Publication Journal of Imaging Science and Technology Abbreviated Journal  
  Volume 53 Issue 3 Pages 031105–9  
  Keywords  
  Abstract The estimation of the illuminant of a scene from a digital image has been the goal of a large amount of research in computer vision. Color constancy algorithms have dealt with this problem by defining different heuristics to select a unique solution from within the feasible set. The performance of these algorithms has shown that there is still a long way to go to globally solve this problem as a preliminary step in computer vision. In general, performance evaluation has been done by comparing the angular error between the estimated chromaticity and the chromaticity of a canonical illuminant, which is highly dependent on the image dataset. Recently, some researchers have used high-level constraints to estimate illuminants; in this case the selection is based on increasing the performance of the subsequent steps of the system. In this paper we propose a new performance measure, the perceptual angular error. It evaluates the performance of a color constancy algorithm according to the perceptual preferences of humans, or naturalness (instead of the actual optimal solution), and is independent of the visual task. We show the results of a new psychophysical experiment comparing solutions from three different color constancy algorithms. Our results show that in more than half of the judgments the preferred solution is not the one closest to the optimal solution. Our experiments were performed on a new dataset of images acquired with a calibrated camera with an attached neutral grey sphere, which better copes with the illuminant variations of the scene. (A minimal angular-error sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes CIC Approved no  
  Call Number CAT @ cat @ VPV2009a Serial 1171  
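For reference, the recovery angular error that the perceptual measure above is contrasted with is simply the angle between the estimated and the canonical illuminant vectors; a minimal implementation follows. The perceptual angular error itself depends on the psychophysical preference data and is not reproduced here; the example vectors are made up.

```python
import numpy as np

def angular_error(estimated, canonical):
    """Angle in degrees between an estimated illuminant (RGB) and the ground truth."""
    e = np.asarray(estimated, dtype=float)
    c = np.asarray(canonical, dtype=float)
    cos = np.dot(e, c) / (np.linalg.norm(e) * np.linalg.norm(c))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: a grey-world estimate compared against a measured white illuminant.
print(angular_error([0.9, 1.0, 0.8], [1.0, 1.0, 1.0]))  # ~5.2 degrees
```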
 

 
Author Graham D. Finlayson; Javier Vazquez; Fufu Fang
  Title The Discrete Cosine Maximum Ignorance Assumption Type Conference Article
  Year 2021 Publication 29th Color and Imaging Conference Abbreviated Journal  
  Volume Issue Pages 13-18  
  Keywords  
  Abstract The performance of colour correction algorithms is dependent on the reflectance sets used. Sometimes, when the testing reflectance set is changed, the ranking of colour correction algorithms also changes. To remove the dependence on the dataset, we can make assumptions about the set of all possible reflectances. In the Maximum Ignorance with Positivity (MIP) assumption we assume that all reflectances with per-wavelength values between 0 and 1 are equally likely. A weakness of the MIP is that it fails to take into account the correlation of reflectance functions between wavelengths (many of the assumed reflectances are, in reality, not possible). In this paper, we take the view that the maximum ignorance assumption has merit but that, hitherto, it has been calculated with respect to the wrong coordinate basis. Here, we propose the Discrete Cosine Maximum Ignorance assumption (DCMI), where all reflectances that have coordinates between max and min bounds in the Discrete Cosine Basis coordinate system are equally likely. Here, the correlation between wavelengths is encoded, and this results in the set of all plausible reflectances "looking like" typical reflectances that occur in nature. That said, the DCMI model is also a superset of all measured reflectance sets. Experiments show that, in colour correction, adopting the DCMI results in colour correction performance similar to that obtained using a particular reflectance set. (An illustrative DCMI sampling sketch follows this record.)
  Address Virtual; November 2021  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CIC  
  Notes CIC Approved no  
  Call Number FVF2021 Serial 3596  
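One way to see what the DCMI assumption amounts to is to draw reflectances whose coefficients are uniform within bounds in a truncated Discrete Cosine basis and map them back to the wavelength domain. The sketch below does that with an explicit orthonormal DCT-II basis; the number of basis functions, the per-coefficient bounds and the 31-sample wavelength grid are illustrative assumptions, not the bounds used in the paper.

```python
import numpy as np

def dct_basis(n_wavelengths, n_terms):
    """Orthonormal DCT-II basis functions sampled at n_wavelengths points."""
    k = np.arange(n_terms)[:, None]
    x = np.arange(n_wavelengths)[None, :]
    basis = np.cos(np.pi * k * (2 * x + 1) / (2 * n_wavelengths))
    basis[0] *= np.sqrt(1.0 / n_wavelengths)
    basis[1:] *= np.sqrt(2.0 / n_wavelengths)
    return basis                      # shape (n_terms, n_wavelengths)

def sample_dcmi(n_samples, lower, upper, n_wavelengths=31, seed=0):
    """Draw reflectances whose DCT coefficients are uniform between the bounds."""
    rng = np.random.default_rng(seed)
    coeffs = rng.uniform(lower, upper, size=(n_samples, len(lower)))
    return coeffs @ dct_basis(n_wavelengths, len(lower))

# Illustrative bounds on the first 8 coefficients (wide DC term, shrinking AC terms).
lower = np.array([0.0, -0.3, -0.2, -0.15, -0.1, -0.08, -0.05, -0.03])
upper = np.array([2.0,  0.3,  0.2,  0.15,  0.1,  0.08,  0.05,  0.03])
reflectances = sample_dcmi(1000, lower, upper)   # (1000, 31) spectra
```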
 

 
Author Mirko Arnold; Stephan Ameling; Anarta Ghosh; Gerard Lacey
  Title Quality Improvement of Endoscopy Videos Type Conference Article
  Year 2011 Publication Proceedings of the 8th IASTED International Conference on Biomedical Engineering Abbreviated Journal  
  Volume 723 Issue Pages  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area 800 Expedition Conference  
  Notes MV Approved no  
  Call Number fernando @ fernando @ Serial 2426  
 

 
Author Pau Rodriguez; Jordi Gonzalez; Josep M. Gonfaus; Xavier Roca
  Title Integrating Vision and Language in Social Networks for Identifying Visual Patterns of Personality Traits Type Journal
  Year 2019 Publication International Journal of Social Science and Humanity Abbreviated Journal IJSSH  
  Volume 9 Issue 1 Pages 6-12  
  Keywords  
  Abstract Social media, as a major platform for communication and information exchange, is a rich repository of the opinions and sentiments of 2.3 billion users about a vast spectrum of topics. In this sense, user text interactions are widely used to sense the whys of certain social users' demands and culturally driven interests. However, the knowledge embedded in the 1.8 billion pictures which are uploaded daily in public profiles has just started to be exploited. Following this trend in visual-based social analysis, we present a novel methodology based on neural networks to build a combined image-and-text based personality trait model, trained with images posted together with words found highly correlated to specific personality traits. The key contribution of this work is to explore whether OCEAN personality trait modeling can be addressed based on images, here called MindPics, appearing with certain tags with psychological insights. We found that there is a correlation between posted images and the personality estimated from their accompanying texts. Thus, the experimental results are consistent with previous cyber-psychology results based on texts, suggesting that images could also be used for personality estimation: classification results on some personality traits show that specific and characteristic visual patterns emerge, in essence representing abstract concepts. These results open new avenues of research for further refining the proposed personality model under the supervision of psychology experts, and for eventually replacing current textual personality questionnaires with image-based ones.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE; 600.119 Approved no  
  Call Number Admin @ si @ RGG2019 Serial 3414  
 

 
Author Jose Elias Yauri; M. Lagos; H. Vega-Huerta; P. de-la-Cruz; G.L.E Maquen-Niño; E. Condor-Tinoco
  Title Detection of Epileptic Seizures Based-on Channel Fusion and Transformer Network in EEG Recordings Type Journal Article
  Year 2023 Publication International Journal of Advanced Computer Science and Applications Abbreviated Journal IJACSA  
  Volume 14 Issue 5 Pages 1067-1074  
  Keywords Epilepsy; epilepsy detection; EEG; EEG channel fusion; convolutional neural network; self-attention  
  Abstract According to the World Health Organization, epilepsy affects more than 50 million people in the world, and 80% of them live in developing countries. Therefore, epilepsy has become a major public health issue for many governments and deserves attention. Epilepsy is characterized by uncontrollable seizures in the subject due to sudden abnormal functioning of the brain. Recurrent epileptic seizures change people's lives and interfere with their daily activities. Although epilepsy has no cure, it can be mitigated with an appropriate diagnosis and medication. Usually, epilepsy diagnosis is based on the analysis of an electroencephalogram (EEG) of the patient. However, the process of searching for seizure patterns in a multichannel EEG recording is a visually demanding and time-consuming task, even for experienced neurologists. Despite the recent progress in automatic recognition of epilepsy, the multichannel nature of EEG recordings still challenges current methods. In this work, a new method to detect epilepsy in multichannel EEG recordings is proposed. First, the method uses convolutions to perform channel fusion, and then a self-attention network extracts temporal features to classify between interictal and ictal epilepsy states. The method was validated on the public CHB-MIT dataset using k-fold cross-validation and achieved 99.74% specificity and 99.15% sensitivity, surpassing current approaches. (An illustrative pipeline sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes IAM Approved no  
  Call Number Admin @ si @ Serial 3856  
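The exact architecture is not given in the abstract, so the PyTorch sketch below only mirrors the pipeline it describes: a 1-D convolution that fuses the EEG channels into a feature sequence, a self-attention (transformer) encoder over time, and a two-class interictal/ictal head. The kernel size, model width, window length and the 23-channel input (typical of CHB-MIT recordings) are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class SeizureDetector(nn.Module):
    def __init__(self, n_channels=23, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        # Channel fusion: mix all EEG channels into d_model feature maps per time step.
        self.fuse = nn.Conv1d(n_channels, d_model, kernel_size=7, padding=3)
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        feats = self.fuse(x)              # (batch, d_model, time)
        feats = feats.permute(0, 2, 1)    # (batch, time, d_model) for the encoder
        feats = self.encoder(feats)       # self-attention over the time axis
        return self.head(feats.mean(dim=1))  # average over time, then classify

model = SeizureDetector()
logits = model(torch.randn(8, 23, 512))   # 8 EEG windows of 512 samples each
```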
 

 
Author Penny Tarling; Mauricio Cantor; Albert Clapes; Sergio Escalera
  Title Deep learning with self-supervision and uncertainty regularization to count fish in underwater images Type Journal Article
  Year 2022 Publication PloS One Abbreviated Journal Plos  
  Volume 17 Issue 5 Pages e0267759  
  Keywords  
  Abstract Effective conservation actions require effective population monitoring. However, accurately counting animals in the wild to inform conservation decision-making is difficult. Monitoring populations through image sampling has made data collection cheaper, wide-reaching and less intrusive, but it has created a need to process and analyse these data efficiently. Counting animals from such data is challenging, particularly when they are densely packed in noisy images. Attempting this manually is slow and expensive, while traditional computer vision methods are limited in their generalisability. Deep learning is the state-of-the-art method for many computer vision tasks, but it has yet to be properly explored for counting animals. To this end, we employ deep learning, with a density-based regression approach, to count fish in low-resolution sonar images. We introduce a large dataset of sonar videos, deployed to record wild Lebranche mullet schools (Mugil liza), with a subset of 500 labelled images. We utilise abundant unlabelled data in a self-supervised task to improve the supervised counting task. For the first time in this context, by introducing uncertainty quantification, we improve model training and provide an accompanying measure of prediction uncertainty for more informed biological decision-making. Finally, we demonstrate the generalisability of our proposed counting framework by testing it on a recent benchmark dataset of high-resolution annotated underwater images from varying habitats (DeepFish). From experiments on both contrasting datasets, we demonstrate that our network outperforms the few other deep learning models implemented for solving this task. By providing an open-source framework along with training data, our study puts forth an efficient deep learning template for crowd counting aquatic animals, thereby contributing effective methods to assess natural populations from the ever-increasing visual data. (An illustrative density-map sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Public Library of Science Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA Approved no  
  Call Number Admin @ si @ TCC2022 Serial 3743  
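The counting network itself is not reproduced here. The sketch below only illustrates the density-based regression target the abstract refers to: point annotations are spread into a Gaussian density map whose integral equals the number of fish, so a regressor trained to predict such maps yields a count by summation. The frame size, kernel width and toy annotations are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(points, shape, sigma=4.0):
    """Turn point annotations (one per fish) into a density map whose sum
    equals the number of annotated fish."""
    target = np.zeros(shape, dtype=np.float64)
    for y, x in points:
        target[int(y), int(x)] += 1.0
    return gaussian_filter(target, sigma=sigma)

# Toy sonar frame: 5 annotated fish in a 128x128 image.
annotations = [(20, 30), (40, 90), (64, 64), (100, 15), (110, 110)]
dmap = density_map(annotations, shape=(128, 128))
print(dmap.sum())   # ~5.0; a regression network is trained to predict such maps
```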
 

 
Author C. Alejandro Parraga; Arash Akbarinia
  Title NICE: A Computational Solution to Close the Gap from Colour Perception to Colour Categorization Type Journal Article
  Year 2016 Publication PLoS One Abbreviated Journal Plos  
  Volume 11 Issue 3 Pages e0149538  
  Keywords  
  Abstract The segmentation of visible electromagnetic radiation into chromatic categories by the human visual system has been extensively studied from a perceptual point of view, resulting in several colour appearance models. However, there is currently a void when it comes to relating these results to the physiological mechanisms that are known to shape the pre-cortical and cortical visual pathway. This work intends to begin to fill this void by proposing a new physiologically plausible model of colour categorization based on Neural Isoresponsive Colour Ellipsoids (NICE) in the cone-contrast space defined by the main directions of the visual signals entering the visual cortex. The model was adjusted to fit psychophysical measures that concentrate on the categorical boundaries and are consistent with the ellipsoidal isoresponse surfaces of visual cortical neurons. By revealing the shape of such categorical colour regions, our measures allow for a more precise and parsimonious description, connecting well-known early visual processing mechanisms to the less understood phenomenon of colour categorization. To test the feasibility of our method we applied it to exemplary images and a popular ground-truth chart, obtaining labelling results that are better than those of current state-of-the-art algorithms.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes NEUROBIT; 600.068 Approved no  
  Call Number Admin @ si @ PaA2016a Serial 2747  
 

 
Author David Masip; Michael S. North; Alexander Todorov; Daniel N. Osherson
  Title Automated Prediction of Preferences Using Facial Expressions Type Journal Article
  Year 2014 Publication PloS one Abbreviated Journal Plos  
  Volume 9 Issue 2 Pages e87434  
  Keywords  
  Abstract We introduce a computer vision problem from social cognition, namely, the automated detection of attitudes from a person's spontaneous facial expressions. To illustrate the challenges, we introduce two simple algorithms designed to predict observers' preferences between images (e.g., of celebrities) based on covert videos of the observers' faces. The two algorithms are almost as accurate as human judges performing the same task but nonetheless far from perfect. Our approach is to locate facial landmarks, then predict preference on the basis of their temporal dynamics. The database contains 768 videos involving four different kinds of preferences. We make it publicly available.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes OR;MV Approved no  
  Call Number Admin @ si @ MNT2014 Serial 2453  
 

 
Author A.S. Coquel; J.P. Jacob; M. Primet; A. Demarez; Mariella Dimiccoli; T. Julou; L. Moisan; A. Lindner; H. Berry
  Title Localization of protein aggregation in Escherichia coli is governed by diffusion and nucleoid macromolecular crowding effect Type Journal Article
  Year 2013 Publication Plos Computational Biology Abbreviated Journal PCB  
  Volume 9 Issue 4 Pages  
  Keywords  
  Abstract Aggregates of misfolded proteins are a hallmark of many age-related diseases. Recently, they have been linked to aging of Escherichia coli (E. coli), where protein aggregates accumulate at the old pole region of the aging bacterium. Because of the potential of E. coli as a model organism, elucidating aging and protein aggregation in this bacterium may pave the way to significant advances in our global understanding of aging. A first obstacle along this path is to decipher the mechanisms by which protein aggregates are targeted to specific intracellular locations. Here, using an integrated approach based on individual-based modeling, time-lapse fluorescence microscopy and automated image analysis, we show that the movement of aging-related protein aggregates in E. coli is purely diffusive (Brownian). Using single-particle tracking of protein aggregates in live E. coli cells, we estimated the average size and diffusion constant of the aggregates. Our results provide evidence that the aggregates passively diffuse within the cell, with diffusion constants that depend on their size in agreement with the Stokes-Einstein law. However, the aggregate displacements along the cell long axis are confined to a region that roughly corresponds to the nucleoid-free space in the cell pole, thus confirming the importance of increased macromolecular crowding in the nucleoids. We thus used 3D individual-based modeling to show that these three ingredients (diffusion, aggregation and diffusion hindrance in the nucleoids) are sufficient and necessary to reproduce the available experimental data on aggregate localization in the cells. Taken together, our results strongly support the hypothesis that the localization of aging-related protein aggregates in the poles of E. coli results from the coupling of passive diffusion-aggregation with spatially non-homogeneous macromolecular crowding. They further support the importance of “soft” intracellular structuring (based on macromolecular crowding) in diffusion-based protein localization in E. coli. (A worked Stokes-Einstein example follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor Stanislav Shvartsman, Princeton University, United States of America  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number Admin @ si @ CJP2013 Serial 2786  
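As a worked example of the Stokes-Einstein law invoked above, the snippet below evaluates D = k_B*T / (6*pi*eta*r) for a single assumed aggregate size; the temperature and especially the effective viscosity (water is used here, whereas the crowded cytoplasm is considerably more viscous) are rough assumptions, not values from the paper.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 310.0               # assumed temperature, K (~37 C)
eta = 1.0e-3            # assumed viscosity, Pa*s (water; E. coli cytoplasm is higher)
r = 50e-9               # assumed aggregate radius, m (100 nm diameter)

D = k_B * T / (6 * math.pi * eta * r)        # Stokes-Einstein diffusion constant
print(f"D = {D:.2e} m^2/s = {D * 1e12:.2f} um^2/s")
```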
 

 
Author Xim Cerda-Company; Xavier Otazu
  Title Color induction in equiluminant flashed stimuli Type Journal Article
  Year 2019 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A  
  Volume 36 Issue 1 Pages 22-31  
  Keywords  
  Abstract Color induction is the influence of the surrounding color (inducer) on the perceived color of a central region. There are two different types of color induction: color contrast (the color of the central region shifts away from that of the inducer) and color assimilation (the color shifts towards the color of the inducer). Several studies on these effects have used uniform and striped surrounds, reporting color contrast and color assimilation, respectively. Other authors [J. Vis. 12(1), 22 (2012)] have studied color induction using flashed uniform surrounds, reporting that the contrast is higher for shorter flash durations. Extending their study, we present new psychophysical results using both flashed and static (i.e., non-flashed) equiluminant stimuli for both striped and uniform surrounds. As in that study, for uniform surround stimuli we observed color contrast, but we obtained the maximum contrast not for the shortest (10 ms) flashed stimuli but for 40 ms. We only observed this maximum contrast for red, green, and lime inducers, while for a purple inducer we obtained an asymptotic profile along the flash duration. For striped stimuli, we observed color assimilation only for the static (infinite flash duration) red–green surround inducers (red first inducer, green second inducer). For the other inducer configurations, we observed color contrast or no induction. Since other studies showed that non-equiluminant striped static stimuli induce color assimilation, our results also suggest that luminance differences could be a key factor in inducing it.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes NEUROBIT; 600.120; 600.128 Approved no  
  Call Number Admin @ si @ CeO2019 Serial 3226  