Records | |||||
---|---|---|---|---|---|
Author | Robert Benavente; Maria Vanrell; Ramon Baldrich | ||||
Title | Parametric Fuzzy Sets for Automatic Color Naming | Type | Journal Article | ||
Year | 2008 | Publication | Journal of the Optical Society of America A | Abbreviated Journal | |
Volume | 25 | Issue | 10 | Pages | 2582–2593 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ BVB2008 | Serial | 1004 | ||
Author | Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga | ||||
Title | Low-level SpatioChromatic Grouping for Saliency Estimation | Type | Journal Article | ||
Year | 2013 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 35 | Issue | 11 | Pages | 2810-2816 |
Keywords | |||||
Abstract | We propose a saliency model termed SIM (saliency by induction mechanisms), which is based on a low-level spatiochromatic model that has successfully predicted chromatic induction phenomena. In so doing, we hypothesize that the low-level visual mechanisms that enhance or suppress image detail are also responsible for making some image regions more salient. Moreover, SIM adds geometrical grouplets to enhance complex low-level features such as corners, and suppress relatively simpler features such as edges. Since our model has been fitted on psychophysical chromatic induction data, it is largely nonparametric. SIM outperforms state-of-the-art methods in predicting eye fixations on two datasets and using two metrics. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | CIC; 600.051; 600.052; 605.203 | Approved | no | ||
Call Number | Admin @ si @ MVO2013 | Serial | 2289 | ||
Author | Rahat Khan; Joost Van de Weijer; Fahad Shahbaz Khan; Damien Muselet; Christophe Ducottet; Cecile Barat | ||||
Title | Discriminative Color Descriptors | Type | Conference Article | ||
Year | 2013 | Publication | IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 2866-2873 | ||
Keywords | |||||
Abstract | Color description is a challenging task because of large variations in RGB values which occur due to scene accidental events, such as shadows, shading, specularities, illuminant color changes, and changes in viewing geometry. Traditionally, this challenge has been addressed by capturing the variations in physics-based models, and deriving invariants for the undesired variations. The drawback of this approach is that sets of distinguishable colors in the original color space are mapped to the same value in the photometric invariant space. This results in a drop of discriminative power of the color description. In this paper we take an information theoretic approach to color description. We cluster color values together based on their discriminative power in a classification problem. The clustering has the explicit objective to minimize the drop of mutual information of the final representation. We show that such a color description automatically learns a certain degree of photometric invariance. We also show that a universal color representation, which is based on other data sets than the one at hand, can obtain competing performance. Experiments show that the proposed descriptor outperforms existing photometric invariants. Furthermore, we show that combined with shape description these color descriptors obtain excellent results on four challenging datasets, namely, PASCAL VOC 2007, Flowers-102, Stanford dogs-120 and Birds-200. | ||||
Address | Portland; Oregon; June 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1063-6919 | ISBN | Medium | ||
Area | Expedition | Conference | CVPR | ||
Notes | CIC; 600.048 | Approved | no | ||
Call Number | Admin @ si @ KWK2013a | Serial | 2262 | ||
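The information-theoretic clustering summarised in the abstract above can be sketched in a few lines: color bins are greedily merged so that the drop in mutual information between color and class label stays small, the exact merge cost being a prior-weighted Jensen-Shannon divergence between class posteriors. This is a minimal sketch under the assumption of a simple agglomerative strategy; all names are illustrative and none of this reproduces the authors' implementation or their universal color representation.

```python
import numpy as np

def merge_cost(p_a, p_b):
    # Mutual-information loss of merging two clusters; p_a, p_b are rows of
    # the joint table P(cluster, class), not normalised to sum to one.
    w_a, w_b = p_a.sum(), p_b.sum()
    m = (p_a + p_b) / (w_a + w_b)          # class posterior of the merged cluster
    def kl(p, q):
        nz = p > 0
        return float(np.sum(p[nz] * np.log(p[nz] / q[nz])))
    return w_a * kl(p_a / w_a, m) + w_b * kl(p_b / w_b, m)

def cluster_color_bins(joint_counts, n_clusters):
    # joint_counts: (n_color_bins, n_classes) co-occurrence counts; every bin
    # is assumed to occur at least once so the cluster priors stay positive.
    joint = joint_counts / joint_counts.sum()
    clusters = [[i] for i in range(joint.shape[0])]
    rows = [joint[i].copy() for i in range(joint.shape[0])]
    while len(clusters) > n_clusters:
        pairs = [(a, b) for a in range(len(rows)) for b in range(a + 1, len(rows))]
        a, b = min(pairs, key=lambda ab: merge_cost(rows[ab[0]], rows[ab[1]]))
        clusters[a] += clusters.pop(b)      # bins in one cluster share a word
        rows[a] = rows[a] + rows.pop(b)
    return clusters
```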
Author | Josep M. Gonfaus; Xavier Boix; Joost Van de Weijer; Andrew Bagdanov; Joan Serrat; Jordi Gonzalez | ||||
Title | Harmony Potentials for Joint Classification and Segmentation | Type | Conference Article | ||
Year | 2010 | Publication | 23rd IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 3280–3287 | ||
Keywords | |||||
Abstract | Hierarchical conditional random fields have been successfully applied to object segmentation. One reason is their ability to incorporate contextual information at different scales. However, these models do not allow multiple labels to be assigned to a single node. At higher scales in the image, this yields an oversimplified model, since multiple classes can reasonably be expected to appear within one region. This simplified model especially limits the impact that observations at larger scales may have on the CRF model. Neglecting the information at larger scales is undesirable, since class-label estimates based on these scales are more reliable than at smaller, noisier scales. To address this problem, we propose a new potential, called the harmony potential, which can encode any possible combination of class labels. We propose an effective sampling strategy that renders tractable the underlying optimization problem. Results show that our approach obtains state-of-the-art results on two challenging datasets: PASCAL VOC 2009 and MSRC-21. | ||||
Address | San Francisco, CA, USA | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1063-6919 | ISBN | 978-1-4244-6984-0 | Medium | |
Area | Expedition | Conference | CVPR | ||
Notes | ADAS;CIC;ISE | Approved | no | ||
Call Number | ADAS @ adas @ GBW2010 | Serial | 1296 | ||
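For intuition only, the consistency idea behind the harmony potential described above can be reduced to a toy rule: a node at a lower scale pays nothing when its label belongs to the label combination hypothesised at the higher scale, and a fixed penalty otherwise. The real potential and its sampling-based optimisation are substantially richer; everything below is an illustrative assumption.

```python
def harmony_potential(region_label_set, pixel_label, penalty=1.0):
    # Zero cost if the pixel label is consistent with the (multi-label)
    # hypothesis for its enclosing region, a fixed cost otherwise.
    return 0.0 if pixel_label in region_label_set else penalty

# A region hypothesised to contain {"sky", "tree"} is consistent with a
# "sky" pixel but penalises a "car" pixel.
assert harmony_potential({"sky", "tree"}, "sky") == 0.0
assert harmony_potential({"sky", "tree"}, "car") == 1.0
```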
Author | Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell; Antonio Lopez | ||||
Title | Color Attributes for Object Detection | Type | Conference Article | ||
Year | 2012 | Publication | 25th IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 3306-3313 | ||
Keywords | pedestrian detection | ||||
Abstract | State-of-the-art object detectors typically use shape information as a low-level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yield suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and when combined with traditional shape features provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods. | ||||
Address | Providence; Rhode Island; USA | ||||
Corporate Author | Thesis | ||||
Publisher | IEEE Xplore | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1063-6919 | ISBN | 978-1-4673-1226-4 | Medium | |
Area | Expedition | Conference | CVPR | ||
Notes | ADAS; CIC | Approved | no | ||
Call Number | Admin @ si @ KRW2012 | Serial | 1935 | ||
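The late-fusion scheme the abstract argues for can be sketched as follows: a per-patch color attribute is read from a lookup table mapping quantised RGB values to probabilities over the 11 basic color names, and concatenated with a separately computed shape feature. Color-name tables of this kind exist in the authors' earlier work, but the table here is random placeholder data and the interfaces are assumptions.

```python
import numpy as np

# Hypothetical 32x32x32 lookup table: quantised RGB -> probabilities over the
# 11 basic color names. Random Dirichlet rows stand in for a learned table.
rng = np.random.default_rng(0)
COLOR_NAME_LUT = rng.dirichlet(np.ones(11), size=32 ** 3)

def color_attribute(rgb_patch):
    # 11-D color attribute: mean color-name probability over the patch pixels.
    idx = rgb_patch.astype(int) // 8                 # 32 bins per channel
    flat = idx[..., 0] * 1024 + idx[..., 1] * 32 + idx[..., 2]
    return COLOR_NAME_LUT[flat.ravel()].mean(axis=0)

def fused_descriptor(rgb_patch, shape_feature):
    # Fusion at the feature level: compact color attributes concatenated with
    # a shape descriptor such as HOG (computed elsewhere).
    return np.concatenate([color_attribute(rgb_patch), shape_feature])
```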
Author | Fahad Shahbaz Khan; Joost Van de Weijer; Muhammad Anwer Rao; Michael Felsberg; Carlo Gatta | ||||
Title | Semantic Pyramids for Gender and Action Recognition | Type | Journal Article | ||
Year | 2014 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 23 | Issue | 8 | Pages | 3633-3645 |
Keywords | |||||
Abstract | Person description is a challenging problem in computer vision. We investigated two major aspects of person description: 1) gender and 2) action recognition in still images. Most state-of-the-art approaches for gender and action recognition rely on the description of a single body part, such as face or full-body. However, relying on a single body part is suboptimal due to significant variations in scale, viewpoint, and pose in real-world images. This paper proposes a semantic pyramid approach for pose normalization. Our approach is fully automatic and based on combining information from full-body, upper-body, and face regions for gender and action recognition in still images. The proposed approach does not require any annotations for upper-body and face of a person. Instead, we rely on pretrained state-of-the-art upper-body and face detectors to automatically extract semantic information of a person. Given multiple bounding boxes from each body part detector, we then propose a simple method to select the best candidate bounding box, which is used for feature extraction. Finally, the extracted features from the full-body, upper-body, and face regions are combined into a single representation for classification. To validate the proposed approach for gender recognition, experiments are performed on three large data sets namely: 1) human attribute; 2) head-shoulder; and 3) proxemics. For action recognition, we perform experiments on four data sets most used for benchmarking action recognition in still images: 1) Sports; 2) Willow; 3) PASCAL VOC 2010; and 4) Stanford-40. Our experiments clearly demonstrate that the proposed approach, despite its simplicity, outperforms state-of-the-art methods for gender and action recognition. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1057-7149 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | CIC; LAMP; 601.160; 600.074; 600.079; MILAB | Approved | no | ||
Call Number | Admin @ si @ KWR2014 | Serial | 2507 | ||
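The pipeline described in the abstract above, stripped to its skeleton: pretrained part detectors propose boxes, one candidate per part is kept, and per-part features are concatenated into a single representation. The detector and feature interfaces, the zero-vector fallback, and the raw max-over-scores selection are all assumptions; the paper's candidate-selection method is more involved.

```python
import numpy as np

def semantic_pyramid_descriptor(image, detectors, extract_feature, feat_dim):
    # Concatenate features from full-body, upper-body and face regions,
    # keeping the best-scoring candidate box per part detector.
    parts = []
    for part in ("full_body", "upper_body", "face"):
        candidates = detectors[part](image)               # [(box, score), ...]
        if candidates:
            box, _ = max(candidates, key=lambda c: c[1])  # best candidate
            parts.append(extract_feature(image, box))
        else:
            parts.append(np.zeros(feat_dim))              # part not detected
    return np.concatenate(parts)
```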
Author | Susana Alvarez; Maria Vanrell | ||||
Title | Texton theory revisited: a bag-of-words approach to combine textons | Type | Journal Article | ||
Year | 2012 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 45 | Issue | 12 | Pages | 4312-4325 |
Keywords | |||||
Abstract | The aim of this paper is to revisit an old theory of texture perception and update its computational implementation by extending it to colour. With this in mind we try to capture the optimality of perceptual systems. This is achieved in the proposed approach by sharing well-known early stages of the visual processes and extracting low-dimensional features that perfectly encode adequate properties for a large variety of textures without needing further learning stages. We propose several descriptors in a bag-of-words framework that are derived from different quantisation models on the feature spaces. Our perceptual features are directly given by the shape and colour attributes of image blobs, which are the textons. In this way we avoid learning visual words and directly build the vocabularies on these low-dimensional texton spaces. The main differences between the proposed descriptors lie in how the co-occurrence of blob attributes is represented in the vocabularies. Our approach outperforms the current state of the art in colour texture description, as demonstrated in several experiments on large texture datasets. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0031-3203 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ AlV2012a | Serial | 2130 | ||
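A bag-of-words over texton attributes, as described above, amounts to quantising each blob's low-dimensional shape-plus-colour attribute vector against a vocabulary and histogramming the assignments. Plain k-means below is an assumption standing in for the paper's several quantisation models, and the blob attributes are placeholder data.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

# Toy blob attributes: one row per blob, columns = shape and colour attributes
# (e.g. area, elongation, orientation, L*, a*, b*). Placeholder data.
rng = np.random.default_rng(0)
train_blobs = rng.normal(size=(1000, 6))

# Vocabulary built directly on the low-dimensional texton space.
vocabulary, _ = kmeans2(train_blobs, 64, minit="points")

def texton_bow(blob_attributes):
    # Bag-of-words histogram: assign each blob to its nearest texton word.
    words, _ = vq(blob_attributes, vocabulary)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / max(hist.sum(), 1.0)
```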
Author | Jaykishan Patel; Alban Flachot; Javier Vazquez; David H. Brainard; Thomas S. A. Wallis; Marcus A. Brubaker; Richard F. Murray | ||||
Title | A deep convolutional neural network trained to infer surface reflectance is deceived by mid-level lightness illusions | Type | Journal Article | ||
Year | 2023 | Publication | Journal of Vision | Abbreviated Journal | JV |
Volume | 23 | Issue | 9 | Pages | 4817-4817 |
Keywords | |||||
Abstract | A long-standing view is that lightness illusions are by-products of strategies employed by the visual system to stabilize its perceptual representation of surface reflectance against changes in illumination. Computationally, one such strategy is to infer reflectance from the retinal image, and to base the lightness percept on this inference. CNNs trained to infer reflectance from images have proven successful at solving this problem under limited conditions. To evaluate whether these CNNs provide suitable starting points for computational models of human lightness perception, we tested a state-of-the-art CNN on several lightness illusions, and compared its behaviour to prior measurements of human performance. We trained a CNN (Yu & Smith, 2019) to infer reflectance from luminance images. The network had a 30-layer hourglass architecture with skip connections. We trained the network via supervised learning on 100K images, rendered in Blender, each showing randomly placed geometric objects (surfaces, cubes, tori, etc.), with random Lambertian reflectance patterns (solid, Voronoi, or low-pass noise), under randomized point+ambient lighting. The renderer also provided the ground-truth reflectance images required for training. After training, we applied the network to several visual illusions. These included the argyle, Koffka-Adelson, snake, White’s, checkerboard assimilation, and simultaneous contrast illusions, along with their controls where appropriate. The CNN correctly predicted larger illusions in the argyle, Koffka-Adelson, and snake images than in their controls. It also correctly predicted an assimilation effect in White's illusion. It did not, however, account for the checkerboard assimilation or simultaneous contrast effects. These results are consistent with the view that at least some lightness phenomena are by-products of a rational approach to inferring stable representations of physical properties from intrinsically ambiguous retinal images. Furthermore, they suggest that CNN models may be a promising starting point for new models of human lightness perception. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MACO; CIC | Approved | no | ||
Call Number | Admin @ si @ PFV2023 | Serial | 3890 | ||
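The evaluation step in the abstract above reduces to comparing predicted reflectance at equal-luminance targets: since the two target patches have the same luminance, any difference in the network's reflectance output mirrors a perceived lightness difference. In this minimal sketch, `predict_reflectance` stands in for the trained 30-layer hourglass CNN (not reproduced here) and the region indices are placeholders.

```python
import numpy as np

def illusion_strength(predict_reflectance, image, target_a, target_b):
    # Difference in mean predicted reflectance between two equal-luminance
    # target regions (boolean masks or index arrays) of an illusion image.
    reflectance = predict_reflectance(image)
    return reflectance[target_a].mean() - reflectance[target_b].mean()
```

A non-zero value for the illusion image alongside a near-zero value for its control image would parallel the human percept.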
Author | Domicele Jonauskaite; Lucia Camenzind; C. Alejandro Parraga; Cecile N Diouf; Mathieu Mercapide Ducommun; Lauriane Müller; Melanie Norberg; Christine Mohr | ||||
Title | Colour-emotion associations in individuals with red-green colour blindness | Type | Journal Article | ||
Year | 2021 | Publication | PeerJ | Abbreviated Journal | |
Volume | 9 | Issue | Pages | e11180 | |
Keywords | Affect; Chromotherapy; Colour cognition; Colour vision deficiency; Cross-modal correspondences; Daltonism; Deuteranopia; Dichromatic; Emotion; Protanopia | ||||
Abstract | Colours and emotions are associated in languages and traditions. Some of us may convey sadness by saying we are “feeling blue” or by wearing black clothes at funerals. The first example is a conceptual experience of colour and the second is an immediate perceptual experience of colour. To investigate whether one or the other type of experience more strongly drives colour-emotion associations, we tested 64 congenitally red-green colour-blind men and 66 non-colour-blind men. All participants associated 12 colours, presented as terms or patches, with 20 emotion concepts, and rated the intensities of the associated emotions. We found that colour-blind and non-colour-blind men associated similar emotions with colours, irrespective of whether colours were conveyed via terms (r = .82) or patches (r = .80). The colour-emotion associations and the emotion intensities were not modulated by participants' severity of colour blindness. Hinting at some additional, although minor, role of actual colour perception, the consistencies in associations for colour terms and patches were higher in non-colour-blind than colour-blind men. Together, these results suggest that colour-emotion associations in adults do not require immediate perceptual colour experiences, as conceptual experiences are sufficient. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC; LAMP; 600.120; 600.128 | Approved | no | ||
Call Number | Admin @ si @ JCP2021 | Serial | 3564 | ||
Author | Bojana Gajic; Ramon Baldrich | ||||
Title | Cross-domain fashion image retrieval | Type | Conference Article | ||
Year | 2018 | Publication | CVPR 2018 Workshop on Women in Computer Vision (WiCV 2018, 4th Edition) | Abbreviated Journal | |
Volume | Issue | Pages | 19500-19502 | ||
Keywords | |||||
Abstract | Cross-domain image retrieval is a challenging task that requires matching images from one domain to their counterparts in another domain. In this paper we focus on fashion image retrieval, which involves matching an image of a fashion item taken by a user to images of the same item taken under controlled conditions, usually by a professional photographer. When facing this problem, we have different products at train and test time, and we use a triplet loss to train the network. We stress the importance of proper training of a simple architecture, as well as of adapting general models to the specific task. | ||||
Address | Salt Lake City, USA; 22 June 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | CIC; 600.087 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3709 | ||
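The abstract names a triplet loss; a minimal NumPy rendering is below, treating a user photo as the anchor, the catalogue shot of the same item as the positive, and a different item as the negative. The margin value and the Euclidean distance are assumptions, and the paper's triplet sampling strategy is not reproduced.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # One row per triplet of embeddings: pull the anchor towards the positive,
    # push it away from the negative by at least the margin.
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()
```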
Author | Javier Vazquez; C. Alejandro Parraga; Maria Vanrell; Ramon Baldrich | ||||
Title | Color Constancy Algorithms: Psychophysical Evaluation on a New Dataset | Type | Journal Article | ||
Year | 2009 | Publication | Journal of Imaging Science and Technology | Abbreviated Journal | |
Volume | 53 | Issue | 3 | Pages | 031105 (9 pages)
Keywords | |||||
Abstract | The estimation of the illuminant of a scene from a digital image has been the goal of a large amount of research in computer vision. Color constancy algorithms have dealt with this problem by defining different heuristics to select a unique solution from within the feasible set. The performance of these algorithms has shown that there is still a long way to go to globally solve this problem as a preliminary step in computer vision. In general, performance evaluation has been done by comparing the angular error between the estimated chromaticity and the chromaticity of a canonical illuminant, which is highly dependent on the image dataset. Recently, some researchers have used high-level constraints to estimate illuminants; in this case selection is based on increasing the performance on the subsequent steps of the systems. In this paper we propose a new performance measure, the perceptual angular error. It evaluates the performance of a color constancy algorithm according to the perceptual preferences of humans, or naturalness (instead of the actual optimal solution), and is independent of the visual task. We show the results of a new psychophysical experiment comparing solutions from three different color constancy algorithms. Our results show that in more than half of the judgments the preferred solution is not the one closest to the optimal solution. Our experiments were performed on a new dataset of images acquired with a calibrated camera with an attached neutral grey sphere, which better copes with the illuminant variations of the scene. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ VPV2009a | Serial | 1171 | ||
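For reference, the conventional angular error that the abstract argues against is computed as below. The proposed perceptual angular error additionally weights solutions by measured human preferences and so has no closed form here.

```python
import numpy as np

def angular_error_deg(estimated, ground_truth):
    # Angle in degrees between the estimated and canonical illuminant vectors
    # (e.g. RGB or chromaticity), independent of their magnitudes.
    e = estimated / np.linalg.norm(estimated)
    g = ground_truth / np.linalg.norm(ground_truth)
    return np.degrees(np.arccos(np.clip(np.dot(e, g), -1.0, 1.0)))
```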
Author | C. Alejandro Parraga; Robert Benavente; Maria Vanrell; Ramon Baldrich | ||||
Title | Psychophysical measurements to model inter-colour regions of colour-naming space | Type | Journal Article | ||
Year | 2009 | Publication | Journal of Imaging Science and Technology | Abbreviated Journal | |
Volume | 53 | Issue | 3 | Pages | 031106 (8 pages) |
Keywords | image processing; analysis | ||||
Abstract | In this paper, we present a fuzzy set of parametric functions that segments the CIE Lab space into eleven regions corresponding to the group of common universal categories present in all evolved languages, as identified by anthropologists and linguists. The set of functions is intended to model a color-name assignment task by humans and differs from other models in its emphasis on the inter-color boundary regions, which were explicitly measured by means of a psychophysics experiment. In our particular implementation, the CIE Lab space was segmented into eleven color categories using a Triple Sigmoid as the basis of the fuzzy sets, whose parameters are included in this paper. The model’s parameters were adjusted according to the psychophysical results of a yes/no discrimination paradigm where observers had to choose (English) names for isoluminant colors belonging to regions in between neighboring categories. These colors were presented on a calibrated CRT monitor (14-bit x 3 precision). The experimental results show that inter-color boundary regions are much less defined than expected, and color samples other than those near the most representative ones are needed to define the position and shape of the boundaries between categories. The extended set of model parameters is given as a table. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ PBV2009 | Serial | 1157 | ||
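The building block of the model described above is a sigmoid membership transition across an inter-color boundary; a one-dimensional toy version is sketched below. The full model composes a Triple Sigmoid per category over CIE Lab, with fitted parameters tabulated in the paper that are not reproduced here; the boundary and slope values below are arbitrary.

```python
import numpy as np

def sigmoid_membership(x, boundary, slope):
    # Fuzzy membership rising from 0 to 1 as x crosses the category boundary.
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

# Memberships on either side of an inter-color boundary sum to one, so a
# sample deep in the transition region belongs partially to both categories.
x = np.linspace(-5.0, 5.0, 11)
upper = sigmoid_membership(x, boundary=0.0, slope=2.0)
lower = 1.0 - upper
```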
Author | Olivier Penacchio; Xavier Otazu; Laura Dempere-Marco | ||||
Title | A Neurodynamical Model of Brightness Induction in V1 | Type | Journal Article | ||
Year | 2013 | Publication | PLoS ONE | Abbreviated Journal | PLoS
Volume | 8 | Issue | 5 | Pages | e64086 |
Keywords | |||||
Abstract | Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. Recent neurophysiological evidence suggests that brightness information might be explicitly represented in V1, in contrast to the more common assumption that the striate cortex is an area mostly responsive to sensory information. Here we investigate possible neural mechanisms that offer a plausible explanation for this phenomenon. To this end, a neurodynamical model which is based on neurophysiological evidence and focuses on the part of V1 responsible for contextual influences is presented. The proposed computational model successfully accounts for well-known psychophysical effects for static contexts and also for brightness induction in dynamic contexts defined by modulating the luminance of surrounding areas. This work suggests that intra-cortical interactions in V1 could, at least partially, explain brightness induction effects, and reveals how a common general architecture may account for several different fundamental processes, such as visual saliency and brightness induction, which emerge early in the visual processing pathway. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ POD2013 | Serial | 2242 | ||