Author | Xavier Roca; Jordi Vitria; Maria Vanrell; Juan J. Villanueva |
Title | Gaze control in a binocular robot system | Type | Miscellaneous |
Year | 1999 | Publication | Abbreviated Journal |
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | ||||||
Address | Barcelona | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ||||
Notes | OR;ISE;CIC;MV | Approved | no | |||
Call Number | BCNPCL @ bcnpcl @ RVV1999b | Serial | 41 | |||
Permanent link to this record | ||||||
Author | Robert Benavente; M.C. Olive; Maria Vanrell; Ramon Baldrich |
Title | Colour Perception: A Simple Method for Colour Naming. | Type | Miscellaneous | |||
Year | 1999 | Publication | Abbreviated Journal |
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | ||||||
Address | Girona | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ||||
Notes | CIC | Approved | no | |||
Call Number | CAT @ cat @ BOV1999 | Serial | 47 | |||
Permanent link to this record | ||||||
Author | Eduard Vazquez; Joost Van de Weijer; Ramon Baldrich |
Title | Image Segmentation in the Presence of Shadows and Highlights | Type | Conference Article |
Year | 2008 | Publication | 10th European Conference on Computer Vision | Abbreviated Journal |
Volume | 5305 | Issue | Pages | 1–14 | ||
Keywords | ||||||
Abstract | ||||||
Address | Marseille (France) | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | LNCS | |||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ECCV | |||
Notes | CAT;CIC | Approved | no | |||
Call Number | CAT @ cat @ VVB2008b | Serial | 1013 | |||
Permanent link to this record | ||||||
Author | Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell |
Title | Top-Down Color Attention for Object Recognition | Type | Conference Article | |||
Year | 2009 | Publication | 12th International Conference on Computer Vision | Abbreviated Journal |
Volume | Issue | Pages | 979 - 986 | |||
Keywords | ||||||
Abstract | Generally, the bag-of-words image representation follows a bottom-up paradigm: the subsequent stages of the process (feature detection, feature description, vocabulary construction and image representation) are performed independently of the object classes to be detected. In such a framework, combining multiple cues such as shape and color often provides below-expected results. This paper presents a novel method for recognizing object categories from multiple cues by separating the shape and color cues. Color is used to guide attention by means of a top-down, category-specific attention map. The color attention map is then deployed to modulate the shape features, taking more features from regions within an image that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets of varied cue importance, namely Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that on all three data sets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information. |
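Read as an algorithm, the attention step described in this abstract weights each shape visual word by a top-down, category-specific color probability before histogramming. A minimal sketch of that idea, assuming hypothetical inputs (the function name and toy data are illustrative, not the authors' code):

```python
import numpy as np

def color_attention_histogram(shape_words, color_probs, n_words):
    """Build one category-specific histogram: each shape visual word is
    weighted by the (top-down) probability that its region's color
    belongs to the target category."""
    hist = np.zeros(n_words)
    for word, p_category in zip(shape_words, color_probs):
        hist[word] += p_category  # attention modulates the shape count
    total = hist.sum()
    return hist / total if total > 0 else hist

# toy example: 4 detected shape words, attention favours regions 0 and 3
h = color_attention_histogram([0, 1, 2, 0], [0.9, 0.1, 0.8, 0.9], n_words=3)
```

Regions with high attention contribute more mass, so the same shape features yield a different histogram per category, which is the category-specific representation the abstract describes.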
Address | Kyoto, Japan | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | 1550-5499 | ISBN | 978-1-4244-4420-5 | Medium | ||
Area | Expedition | Conference | ICCV | |||
Notes | CIC | Approved | no | |||
Call Number | CAT @ cat @ SWV2009 | Serial | 1196 | |||
Permanent link to this record | ||||||
Author | Christophe Rigaud; Dimosthenis Karatzas; Joost Van de Weijer; Jean-Christophe Burie; Jean-Marc Ogier |
Title | An active contour model for speech balloon detection in comics | Type | Conference Article | |||
Year | 2013 | Publication | 12th International Conference on Document Analysis and Recognition | Abbreviated Journal |
Volume | Issue | Pages | 1240-1244 | |||
Keywords | ||||||
Abstract | Comic books constitute an important cultural heritage asset in many countries. Digitization combined with subsequent comic book understanding would enable a variety of new applications, including content-based retrieval and content retargeting. Document understanding in this domain is challenging, as comics are semi-structured documents combining semantically important graphical and textual parts. Few studies have been done in this direction. In this work we detail a novel approach for closed and non-closed speech balloon localization in scanned comic book pages, an essential step towards fully automatic comic book understanding. The approach is compared with existing methods for closed balloon localization found in the literature and results are presented. |
Address | Washington; USA; August 2013 |
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | 1520-5363 | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | |||
Notes | DAG; CIC; 600.056 | Approved | no | |||
Call Number | Admin @ si @ RKW2013a | Serial | 2260 | |||
Permanent link to this record | ||||||
Author | Alicia Fornes; Xavier Otazu; Josep Llados |
Title | Show-through cancellation and image enhancement by multiresolution contrast processing | Type | Conference Article |
Year | 2013 | Publication | 12th International Conference on Document Analysis and Recognition | Abbreviated Journal |
Volume | Issue | Pages | 200-204 | |||
Keywords | ||||||
Abstract | Historical documents suffer from different types of degradation and noise, such as background variation, uneven illumination or dark spots. In double-sided documents, another common problem is that the back side of the document interferes with the front side because of the transparency of the paper or ink bleeding; this effect is called the show-through phenomenon. Many methods have been developed to solve these problems and, in the case of show-through, they typically require scanning and matching both the front and back sides of the document. In contrast, our approach is designed to use only one side of the scanned document. We hypothesize that show-through components have low contrast, while foreground components have high contrast. A Multiresolution Contrast (MC) decomposition is presented in order to estimate the contrast of features at different spatial scales. We cancel the show-through phenomenon by thresholding these low-contrast components. This decomposition is also able to enhance the image, removing shadowed areas by weighting spatial scales. Results show that the enhanced images improve the readability of the documents, allowing scholars both to recover unreadable words and to solve ambiguities. |
Address | Washington; USA; August 2013 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | 1520-5363 | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | |||
Notes | DAG; 602.006; 600.045; 600.061; 600.052;CIC | Approved | no | |||
Call Number | Admin @ si @ FOL2013 | Serial | 2241 | |||
Permanent link to this record | ||||||
Author | Shida Beigpour; Joost Van de Weijer |
Title | Object Recoloring Based on Intrinsic Image Estimation | Type | Conference Article | |||
Year | 2011 | Publication | 13th IEEE International Conference on Computer Vision | Abbreviated Journal |
Volume | Issue | Pages | 327 - 334 | |||
Keywords | ||||||
Abstract | Object recoloring is one of the most popular photo-editing tasks. The problem of object recoloring is highly under-constrained, and existing recoloring methods limit their application to objects lit by a white illuminant. Application of these methods to real-world scenes lit by colored illuminants, multiple illuminants, or interreflections, results in unrealistic recoloring of objects. In this paper, we focus on the recoloring of single-colored objects presegmented from their background. The single-color constraint allows us to fit a more comprehensive physical model to the object. We demonstrate that this permits us to perform realistic recoloring of objects lit by non-white illuminants, and multiple illuminants. Moreover, the model allows for more realistic handling of illuminant alteration of the scene. Recoloring results captured by uncalibrated cameras demonstrate that the proposed framework obtains realistic recoloring for complex natural images. Furthermore we use the model to transfer color between objects and show that the results are more realistic than existing color transfer methods. | |||||
Address | Barcelona | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | 1550-5499 | ISBN | 978-1-4577-1101-5 | Medium | ||
Area | Expedition | Conference | ICCV | |||
Notes | CIC | Approved | no | |||
Call Number | Admin @ si @ BeW2011 | Serial | 1781 | |||
Permanent link to this record | ||||||
Author | Robert Benavente; Gemma Sanchez; Ramon Baldrich; Maria Vanrell; Josep Llados |
Title | Normalized colour segmentation for human appearance description. | Type | Miscellaneous | |||
Year | 2000 | Publication | 15th International Conference on Pattern Recognition, 3:637–641 | Abbreviated Journal |
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | ||||||
Address | Barcelona. | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ||||
Notes | DAG;CIC | Approved | no | |||
Call Number | CAT @ cat @ BSB2000 | Serial | 223 | |||
Permanent link to this record | ||||||
Author | Fahad Shahbaz Khan; Joost Van de Weijer; Sadiq Ali; Michael Felsberg |
Title | Evaluating the impact of color on texture recognition | Type | Conference Article | |||
Year | 2013 | Publication | 15th International Conference on Computer Analysis of Images and Patterns | Abbreviated Journal |
Volume | 8047 | Issue | Pages | 154-162 | ||
Keywords | Color; Texture; image representation | |||||
Abstract | State-of-the-art texture descriptors typically operate on grey-scale images while ignoring color information. A common way to obtain a joint color-texture representation is to combine the two visual cues at the pixel level. However, such an approach provides sub-optimal results for the texture categorisation task. In this paper we investigate how to optimally exploit color information for texture recognition. We evaluate a variety of color descriptors, popular in image classification, for texture categorisation. In addition, we analyze different fusion approaches to combine color and texture cues. Experiments are conducted on the challenging scenes and 10-class texture datasets. Our experiments clearly suggest that in all cases color names provide the best performance, and that late fusion is the best strategy to combine color and texture. Selecting the best color descriptor with the optimal fusion strategy provides a gain of 5% to 8% over texture alone on the scenes and texture datasets. |
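The late-fusion strategy this abstract favours combines the two cues after classification, at the score level, rather than concatenating descriptors before learning. A minimal sketch under that reading; the function name, score values and equal weighting are illustrative assumptions:

```python
import numpy as np

def late_fusion(color_scores, texture_scores, w=0.5):
    """Late fusion: combine per-class classifier scores from the two
    cues after classification, instead of fusing the descriptors."""
    return w * np.asarray(color_scores) + (1 - w) * np.asarray(texture_scores)

# toy per-class scores from a color classifier and a texture classifier
fused = late_fusion([0.2, 0.7, 0.1], [0.6, 0.3, 0.1])
pred = int(np.argmax(fused))  # predicted class = highest fused score
```

Early fusion would instead concatenate the color and texture descriptors and train a single classifier; the abstract's experiments compare exactly these alternatives.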
Address | York; UK; August 2013 | |||||
Corporate Author | Thesis | |||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | |||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | 0302-9743 | ISBN | 978-3-642-40260-9 | Medium | ||
Area | Expedition | Conference | CAIP | |||
Notes | CIC; 600.048 | Approved | no | |||
Call Number | Admin @ si @ KWA2013 | Serial | 2263 | |||
Permanent link to this record | ||||||
Author | Agnes Borras; Francesc Tous; Josep Llados; Maria Vanrell |
Title | High-Level Clothes Description Based on Colour-Texture and Structural Features | Type | Conference Article | |||
Year | 2003 | Publication | 1st Iberian Conference on Pattern Recognition and Image Analysis IbPRIA 2003 | Abbreviated Journal |
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | ||||||
Address | Palma de Mallorca | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ||||
Notes | DAG;CIC | Approved | no | |||
Call Number | CAT @ cat @ BTL2003b | Serial | 369 | |||
Permanent link to this record | ||||||
Author | Rahat Khan; Joost Van de Weijer; Dimosthenis Karatzas; Damien Muselet |
Title | Towards multispectral data acquisition with hand-held devices | Type | Conference Article | |||
Year | 2013 | Publication | 20th IEEE International Conference on Image Processing | Abbreviated Journal |
Volume | Issue | Pages | 2053 - 2057 | |||
Keywords | Multispectral; mobile devices; color measurements | |||||
Abstract | We propose a method to acquire multispectral data with hand-held devices with front-mounted RGB cameras. We propose to use the display of the device as an illuminant while the camera captures images illuminated by the red, green and blue primaries of the display. Three illuminants and three response functions of the camera lead to nine response values, which are used for reflectance estimation. Results are promising and show that the accuracy of the spectral reconstruction improves by 30-40% over spectral reconstruction based on a single illuminant. Furthermore, we propose to compute a sensor-illuminant-aware linear basis by discarding the part of the reflectances that falls in the sensor-illuminant null-space. We show experimentally that optimizing reflectance estimation on these new basis functions decreases the RMSE significantly compared to basis functions that are independent of the sensor-illuminant. We conclude that multispectral data acquisition is potentially possible with consumer hand-held devices such as tablets, mobiles and laptops, opening up applications which are currently considered to be unrealistic. |
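The estimation step in this abstract, nine camera responses plus a linear reflectance basis, reduces to a small least-squares problem. A sketch under stated assumptions: the matrix sizes, the random stand-in system matrix, and the six basis vectors are illustrative, not the paper's calibrated sensor-illuminant data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands = 31  # sample wavelengths, e.g. 400-700 nm in 10 nm steps

# hypothetical system matrix: 9 rows = 3 display primaries x 3 camera channels
S = rng.random((9, n_bands))

# hypothetical linear reflectance basis (e.g. from PCA of measured reflectances)
B = rng.random((n_bands, 6))

def estimate_reflectance(responses, S, B):
    """Least-squares estimate of basis coefficients from the nine camera
    responses, followed by reconstruction of the full spectrum."""
    coeffs, *_ = np.linalg.lstsq(S @ B, responses, rcond=None)
    return B @ coeffs

true_r = B @ rng.random(6)               # a reflectance lying in the basis span
rec_r = estimate_reflectance(S @ true_r, S, B)
```

A reflectance inside the basis span is recovered exactly here; the paper's sensor-illuminant-aware basis addresses the error for real reflectances that fall partly in the null-space.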
Address | Melbourne; Australia; September 2013 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ICIP | |||
Notes | CIC; DAG; 600.048 | Approved | no | |||
Call Number | Admin @ si @ KWK2013b | Serial | 2265 | |||
Permanent link to this record | ||||||
Author | Shida Beigpour; Marc Serra; Joost Van de Weijer; Robert Benavente; Maria Vanrell; Olivier Penacchio; Dimitris Samaras |
Title | Intrinsic Image Evaluation On Synthetic Complex Scenes | Type | Conference Article | |||
Year | 2013 | Publication | 20th IEEE International Conference on Image Processing | Abbreviated Journal |
Volume | Issue | Pages | 285 - 289 | |||
Keywords | ||||||
Abstract | Scene decomposition into its illuminant, shading and reflectance intrinsic images is an essential step for scene understanding. Collecting intrinsic-image ground-truth data is a laborious task, and the assumptions on which the ground-truth procedures are based limit their application to simple scenes with a single object taken in the absence of indirect lighting and interreflections. We investigate synthetic data for intrinsic image research, since the extraction of ground truth is straightforward and it allows for scenes in more realistic situations (e.g., multiple illuminants and interreflections). With this dataset we aim to motivate researchers to further explore intrinsic image decomposition in complex scenes. |
Address | Melbourne; Australia; September 2013 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ICIP | |||
Notes | CIC; 600.048; 600.052; 600.051 | Approved | no | |||
Call Number | Admin @ si @ BSW2013 | Serial | 2264 | |||
Permanent link to this record | ||||||
Author | Susana Alvarez; Anna Salvatella; Maria Vanrell; Xavier Otazu |
Title | Perceptual color texture codebooks for retrieving in highly diverse texture datasets | Type | Conference Article | |||
Year | 2010 | Publication | 20th International Conference on Pattern Recognition | Abbreviated Journal |
Volume | Issue | Pages | 866–869 | |||
Keywords | ||||||
Abstract | Color and texture are visual cues of different nature, and their integration into a useful visual descriptor is not an obvious step. One way to combine both features is to compute texture descriptors independently on each color channel. A second way is to integrate the features at the descriptor level, in which case the problem of normalizing both cues arises. Significant progress in object recognition in recent years has come from the bag-of-words framework, which again deals with the problem of feature combination through the definition of vocabularies of visual words. Inspired by this framework, here we present perceptual textons that allow us to fuse color and texture at the level of p-blobs, which is our feature detection step. Feature representation is based on two uniform spaces representing the attributes of the p-blobs. The low dimensionality of these texton spaces allows us to bypass the usual problems of previous approaches: firstly, there is no need for normalization between cues, and secondly, vocabularies are obtained directly from the perceptual properties of the texton spaces without any learning step. Our proposal improves on the current state of the art in color-texture descriptors in an image retrieval experiment over a highly diverse texture dataset from Corel. |
Address | Istanbul (Turkey) | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | 1051-4651 | ISBN | 978-1-4244-7542-1 | Medium | ||
Area | Expedition | Conference | ICPR | |||
Notes | CIC | Approved | no | |||
Call Number | CAT @ cat @ ASV2010b | Serial | 1426 | |||
Permanent link to this record | ||||||
Author | Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Michael Felsberg |
Title | Scale Coding Bag-of-Words for Action Recognition | Type | Conference Article | |||
Year | 2014 | Publication | 22nd International Conference on Pattern Recognition | Abbreviated Journal |
Volume | Issue | Pages | 1514-1519 | |||
Keywords | ||||||
Abstract | Recognizing human actions in still images is a challenging problem in computer vision due to significant scale, illumination and pose variation. Given the bounding box of a person both at training and test time, the task is to classify the action associated with each bounding box in an image. Most state-of-the-art methods use the bag-of-words paradigm for action recognition, and the bag-of-words framework employing a dense multi-scale grid sampling strategy is the de facto standard for feature detection. This results in a scale-invariant image representation where all the features at multiple scales are binned in a single histogram. We argue that such a scale-invariant strategy is sub-optimal, since it ignores the multi-scale information available with each bounding box of a person. This paper investigates alternative approaches to scale coding for action recognition in still images. We encode multi-scale information explicitly in three different histograms for small, medium and large scale visual words. Our first approach exploits multi-scale information with respect to the image size. In our second approach, we encode multi-scale information relative to the size of the bounding box of a person instance. In each approach, the multi-scale histograms are then concatenated into a single representation for action classification. We validate our approaches on the Willow dataset, which contains seven action categories: interacting with computer, photography, playing music, riding bike, riding horse, running and walking. Our results clearly suggest that the proposed scale coding approaches outperform the conventional scale-invariant technique. Moreover, we show that our approach obtains promising results compared to more complex state-of-the-art methods. |
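The scale-coding idea in this abstract, three separate histograms for small, medium and large scale visual words, concatenated, can be sketched in a few lines. The scale thresholds relative to the bounding-box size are an illustrative assumption, not the paper's values:

```python
import numpy as np

def scale_coded_histogram(words, scales, n_words, box_size):
    """Scale coding: instead of one scale-invariant histogram, bin
    visual words into small/medium/large histograms, with scale
    measured relative to the bounding-box size, then concatenate."""
    edges = (0.05 * box_size, 0.15 * box_size)  # assumed scale thresholds
    hists = [np.zeros(n_words) for _ in range(3)]
    for word, scale in zip(words, scales):
        bin_idx = 0 if scale < edges[0] else (1 if scale < edges[1] else 2)
        hists[bin_idx][word] += 1
    return np.concatenate(hists)

# toy example: 2 visual words, features at three different scales
h = scale_coded_histogram([0, 1, 1], [2.0, 10.0, 40.0], n_words=2, box_size=100)
```

The concatenated vector is three times longer than a single histogram, which is exactly how the representation keeps the multi-scale information a single binning would discard.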
Address | Stockholm; August 2014 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ICPR | |||
Notes | CIC; LAMP; 601.240; 600.074; 600.079 | Approved | no | |||
Call Number | Admin @ si @ KWB2014 | Serial | 2450 | |||
Permanent link to this record | ||||||
Author | Naila Murray; Luca Marchesotti; Florent Perronnin |
Title | Learning to Rank Images using Semantic and Aesthetic Labels | Type | Conference Article | |||
Year | 2012 | Publication | 23rd British Machine Vision Conference | Abbreviated Journal |
Volume | Issue | Pages | 110.1-110.10 | |||
Keywords | ||||||
Abstract | Most works on image retrieval from text queries have addressed the problem of retrieving semantically relevant images. However, the ability to assess the aesthetic quality of an image is an increasingly important differentiating factor for search engines. In this work, given a semantic query, we are interested in retrieving images which are semantically relevant and score highly in terms of aesthetics/visual quality. We use large-margin classifiers and rankers to learn statistical models capable of ordering images based on the aesthetic and semantic information. In particular, we compare two families of approaches: while the first one attempts to learn a single ranker which takes into account both semantic and aesthetic information, the second one learns separate semantic and aesthetic models. We carry out a quantitative and qualitative evaluation on a recently-published large-scale dataset and we show that the second family of techniques significantly outperforms the first one. | |||||
Address | Guildford; UK |
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | 1-901725-46-4 | Medium | |||
Area | Expedition | Conference | BMVC | |||
Notes | CIC | Approved | no | |||
Call Number | Admin @ si @ MMP2012b | Serial | 2027 | |||
Permanent link to this record | ||||||
Author | Josep M. Gonfaus; Xavier Boix; Joost Van de Weijer; Andrew Bagdanov; Joan Serrat; Jordi Gonzalez |
Title | Harmony Potentials for Joint Classification and Segmentation | Type | Conference Article | |||
Year | 2010 | Publication | 23rd IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal |
Volume | Issue | Pages | 3280–3287 | |||
Keywords | ||||||
Abstract | Hierarchical conditional random fields have been successfully applied to object segmentation. One reason is their ability to incorporate contextual information at different scales. However, these models do not allow multiple labels to be assigned to a single node. At higher scales in the image this yields an oversimplified model, since multiple classes can reasonably be expected to appear within one region. This simplified model especially limits the impact that observations at larger scales may have on the CRF model. Neglecting the information at larger scales is undesirable, since class-label estimates based on these scales are more reliable than those at smaller, noisier scales. To address this problem, we propose a new potential, called the harmony potential, which can encode any possible combination of class labels. We propose an effective sampling strategy that renders the underlying optimization problem tractable. Results show that our approach obtains state-of-the-art results on two challenging datasets: Pascal VOC 2009 and MSRC-21. |
Address | San Francisco CA, USA | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | 1063-6919 | ISBN | 978-1-4244-6984-0 | Medium | ||
Area | Expedition | Conference | CVPR | |||
Notes | ADAS;CIC;ISE | Approved | no | |||
Call Number | ADAS @ adas @ GBW2010 | Serial | 1296 | |||
Permanent link to this record | ||||||
Author | Ivet Rafegas; Maria Vanrell |
Title | Color spaces emerging from deep convolutional networks | Type | Conference Article | |||
Year | 2016 | Publication | 24th Color and Imaging Conference | Abbreviated Journal |
Volume | Issue | Pages | 225-230 | |||
Keywords | ||||||
Abstract | (Award for the best interactive session.) Defining color spaces that provide a good encoding of spatio-chromatic properties of color surfaces is an open problem in color science [8, 22]. Related to this, in computer vision the fusion of color with local image features has been studied and evaluated [16], and in human vision research the cells along the visual pathway that are selective to specific color hues are also a focus of attention [7, 14]. In line with these research aims, in this paper we study how color is encoded in a deep Convolutional Neural Network (CNN) that has been trained on more than one million natural images for object recognition. These convolutional nets achieve impressive performance in computer vision and rival the representations in the human brain. We explore how color is represented in a CNN architecture, which can give some intuition about efficient spatio-chromatic representations. In convolutional layers the activation of a neuron is related to a spatial filter that combines spatio-chromatic representations, and we use an inverted version of it to explore its properties. Using a series of unsupervised methods we classify different types of neurons depending on the color axes they define, and we propose an index of color-selectivity of a neuron. We estimate the main color axes that emerge from this trained net and we prove that the color-selectivity of neurons decreases from early to deeper layers. |
Address | San Diego; USA; November 2016 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | CIC | |||
Notes | CIC | Approved | no | |||
Call Number | Admin @ si @ RaV2016a | Serial | 2894 | |||
Permanent link to this record | ||||||
Author | Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell |
Title | Portmanteau Vocabularies for Multi-Cue Image Representation | Type | Conference Article | |||
Year | 2011 | Publication | 25th Annual Conference on Neural Information Processing Systems | Abbreviated Journal |
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau words is compact, has the cue-binding property, and supports individual weighting of cues in the final image representation. State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation. |
Address | ||||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | NIPS | |||
Notes | CIC | Approved | no | |||
Call Number | Admin @ si @ KWB2011 | Serial | 1865 | |||
Permanent link to this record |