Author Victor Vaquero; German Ros; Francesc Moreno-Noguer; Antonio Lopez; Alberto Sanfeliu
  Title Joint coarse-and-fine reasoning for deep optical flow Type Conference Article
  Year 2017 Publication 24th International Conference on Image Processing Abbreviated Journal  
  Volume Issue Pages 2558-2562  
  Keywords  
  Abstract We propose a novel representation for dense pixel-wise estimation tasks using CNNs that boosts accuracy and reduces training time, by explicitly exploiting joint coarse-and-fine reasoning. The coarse reasoning is performed over a discrete classification space to obtain a general rough solution, while the fine details of the solution are obtained over a continuous regression space. In our approach both components are jointly estimated, which proved to be beneficial for improving estimation accuracy. Additionally, we propose a new network architecture, which combines coarse and fine components by treating the fine estimation as a refinement built on top of the coarse solution, and therefore adding details to the general prediction. We apply our approach to the challenging problem of optical flow estimation and empirically validate it against state-of-the-art CNN-based solutions trained from scratch and tested on large optical flow datasets.  
  Address Beijing; China; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICIP  
  Notes ADAS; 600.118 Approved no
  Call Number Admin @ si @ VRM2017 Serial 2898  
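
As a rough illustration of the coarse-and-fine idea described in the abstract above, the following sketch combines a per-pixel classification over discrete flow bins with a continuous regression that refines the coarse estimate. This is a hypothetical PyTorch head, not the authors' architecture; the bin grid, layer sizes, and single-component output are all assumptions.

```python
import torch
import torch.nn as nn

class CoarseFineHead(nn.Module):
    """Joint coarse (classification) and fine (regression) estimation
    for one flow component. Hypothetical sketch, not the paper's model."""
    def __init__(self, in_ch=64, n_bins=32):
        super().__init__()
        self.coarse = nn.Conv2d(in_ch, n_bins, 1)      # per-pixel bin logits
        self.fine = nn.Conv2d(in_ch + n_bins, 1, 1)    # continuous residual
        # representative flow value of each discrete bin (assumed uniform grid)
        self.register_buffer("bin_values", torch.linspace(-20.0, 20.0, n_bins))

    def forward(self, feat):
        logits = self.coarse(feat)                      # B x n_bins x H x W
        probs = logits.softmax(dim=1)
        rough = (probs * self.bin_values.view(1, -1, 1, 1)).sum(1, keepdim=True)
        residual = self.fine(torch.cat([feat, logits], dim=1))
        return rough + residual, logits                 # refined flow + coarse logits

head = CoarseFineHead()
flow_u, coarse_logits = head(torch.rand(2, 64, 32, 32))  # toy feature map
```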
 

 
Author Antonio Lopez; Atsushi Imiya; Tomas Pajdla; Jose Manuel Alvarez
  Title Computer Vision in Vehicle Technology: Land, Sea & Air Type Book Whole
  Year 2017 Publication Abbreviated Journal  
  Volume Issue Pages 161-163  
  Keywords  
  Abstract This chapter examines different vision-based commercial solutions for real-life problems related to vehicles. It is worth mentioning the recent astonishing performance of deep convolutional neural networks (DCNNs) in difficult visual tasks such as image classification, object recognition/localization/detection, and semantic segmentation. In fact, different DCNN architectures are already being explored for low-level tasks such as optical flow and disparity computation, and higher-level ones such as place recognition.
 
  Address  
  Corporate Author Thesis  
  Publisher John Wiley & Sons, Ltd Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-118-86807-2 Medium  
  Area Expedition Conference  
  Notes ADAS; 600.118 Approved no
  Call Number Admin @ si @ LIP2017a Serial 2937  
 

 
Author Antonio Lopez; Gabriel Villalonga; Laura Sellart; German Ros; David Vazquez; Jiaolong Xu; Javier Marin; Azadeh S. Mozafari
  Title Training my car to see using virtual worlds Type Journal Article
  Year 2017 Publication Image and Vision Computing Abbreviated Journal IMAVIS  
  Volume 38 Issue Pages 102-118  
  Keywords  
  Abstract Computer vision technologies are at the core of different advanced driver assistance systems (ADAS) and will play a key role in oncoming autonomous vehicles too. One of the main challenges for such technologies is to perceive the driving environment, i.e. to detect and track relevant driving information in a reliable manner (e.g. pedestrians in the vehicle route, free space to drive through). Nowadays it is clear that machine learning techniques are essential for developing such visual perception for driving. In particular, the standard working pipeline consists of collecting data (i.e. on-board images), manually annotating the data (e.g. drawing bounding boxes around pedestrians), learning a discriminative data representation taking advantage of such annotations (e.g. a deformable part-based model, a deep convolutional neural network), and then assessing the reliability of such representation with the acquired data. In the last two decades most of the research effort focused on representation learning (first designing descriptors and learning classifiers; later doing it end-to-end). Hence, collecting data and, especially, annotating it is essential for learning good representations. While this has been the case from the very beginning, only after the disruptive appearance of deep convolutional neural networks did it become a serious issue, due to their data-hungry nature. In this context, the problem is that manual data annotation is tiresome work prone to errors. Accordingly, in the late 00's we initiated a research line consisting of training visual models using photo-realistic computer graphics, especially focusing on assisted and autonomous driving. In this paper, we summarize this work and show how it has become a new tendency with increasing acceptance.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.118 Approved no
  Call Number Admin @ si @ LVS2017 Serial 2985  
 

 
Author Katerine Diaz; Konstantia Georgouli; Anastasios Koidis; Jesus Martinez del Rincon
  Title Incremental model learning for spectroscopy-based food analysis Type Journal Article
  Year 2017 Publication Chemometrics and Intelligent Laboratory Systems Abbreviated Journal CILS  
  Volume 167 Issue Pages 123-131  
  Keywords Incremental model learning; IGDCV technique; Subspace-based learning; Identification; Vegetable oils; FT-IR spectroscopy
  Abstract In this paper we propose the use of incremental learning for creating and improving multivariate analysis models in the field of chemometrics of spectral data. As main advantages, our proposed incremental subspace-based learning allows creating models faster, progressively improving previously created models, and sharing them between laboratories and institutions without requiring the transfer or disclosure of individual spectra samples. In particular, our approach allows improving the generalization and adaptability of previously generated models with a few new spectral samples, making them applicable to real-world situations. The potential of our approach is demonstrated using vegetable oil type identification based on spectroscopic data as a case study. Results show how incremental models maintain the accuracy of batch learning methodologies while reducing their computational cost and handicaps.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.118 Approved no
  Call Number Admin @ si @ DGK2017 Serial 3002  
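
The incremental-update idea from the abstract above can be illustrated with scikit-learn's IncrementalPCA. The paper's IGDCV technique differs; this sketch only shows how a subspace model can absorb new spectra without revisiting earlier batches, and all data here is synthetic.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
spectra_lab_a = rng.normal(size=(200, 1024))  # synthetic stand-in FT-IR spectra, lab A
spectra_lab_b = rng.normal(size=(150, 1024))  # a later batch from lab B

model = IncrementalPCA(n_components=10)
model.partial_fit(spectra_lab_a)   # initial model built from lab A only
model.partial_fit(spectra_lab_b)   # updated later, without lab A's raw spectra

features = model.transform(spectra_lab_b)  # inputs for a downstream classifier
print(features.shape)                      # (150, 10)
```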
 

 
Author Cristhian Aguilera
  Title Local feature description in cross-spectral imagery Type Book Whole
  Year 2017 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Over the last few years, the number of consumer computer vision applications has increased dramatically. Today, computer vision solutions can be found in video game consoles, smartphone applications, and driving assistance, just to name a few. Ideally, we require the performance of those applications, particularly those that are safety critical, to remain constant under any external environment factors, such as changes in illumination or weather conditions. However, this is not always possible, or is very difficult to achieve, by only using visible imagery, due to the inherent limitations of images from that spectral band. For that reason, the use of images from different or multiple spectral bands is becoming more appealing.
The aforementioned possible advantages of using images from multiple spectral bands in various vision applications make multi-spectral image processing a relevant topic for research and development. As in visible image processing, multi-spectral image processing needs tools and algorithms to handle information from various spectral bands. Furthermore, traditional tools such as local feature detection, which is the basis of many vision tasks such as visual odometry, image registration, or structure from motion, must be adjusted or reformulated to operate under new conditions. Traditional feature detection, description, and matching methods tend to underperform in multi-spectral settings, in comparison to mono-spectral settings, due to the natural differences between each spectral band.
The work in this thesis focuses on the local feature description problem when cross-spectral images are considered. In this context, this dissertation has three main contributions. Firstly, the work starts by proposing the usage of a combination of frequency and spatial information, in a multi-scale scheme, as feature description. Evaluations of this proposal, based on classical hand-made feature descriptors, and comparisons with state-of-the-art cross-spectral approaches, help to find and understand the limitations of such a strategy. Secondly, different convolutional neural network (CNN) based architectures are evaluated when used to describe cross-spectral image patches. Results showed that CNN-based methods, designed to work with visible monocular images, could be successfully applied to the description of images from two different spectral bands, with just minor modifications. In this framework, a novel CNN-based network model, specifically intended to describe image patches from two different spectral bands, is proposed. This network, referred to as Q-Net, outperforms the state of the art in the cross-spectral domain, including both previous hand-made solutions and L2 CNN-based architectures. The third contribution of this dissertation is in the cross-spectral feature description application domain: the multispectral odometry problem is tackled, showing a real application of cross-spectral descriptors.
In addition to the three main contributions mentioned above, in this dissertation two different multi-spectral datasets are generated and shared with the community, to be used as benchmarks for further studies.
 
  Address October 2017  
  Corporate Author Thesis Ph.D. thesis  
  Publisher Ediciones Graficas Rey Place of Publication Editor Angel Sappa  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-84-945373-6-3 Medium  
  Area Expedition Conference  
  Notes ADAS; 600.118 Approved no
  Call Number Admin @ si @ Agu2017 Serial 3020  
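
As a schematic illustration of CNN-based cross-spectral patch description, the sketch below uses a generic two-branch embedding matched by L2 distance. Q-Net's actual architecture is not reproduced here; the patch size, layer sizes, and descriptor dimension are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbed(nn.Module):
    """Embeds a 32x32 single-band patch into a unit-norm descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(dim))

    def forward(self, x):                        # x: B x 1 x 32 x 32
        return F.normalize(self.net(x), dim=1)

embed_vis, embed_nir = PatchEmbed(), PatchEmbed()   # one branch per spectral band
cost = torch.cdist(embed_vis(torch.rand(8, 1, 32, 32)),
                   embed_nir(torch.rand(8, 1, 32, 32)))  # 8 x 8 match costs
```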
 

 
Author Konstantia Georgouli; Katerine Diaz; Jesus Martinez del Rincon; Anastasios Koidis
  Title Building generic, easily-updatable chemometric models with harmonisation and augmentation features: The case of FTIR vegetable oils classification Type Conference Article
  Year 2017 Publication 3rd International Conference Metrology Promoting Standardization and Harmonization in Food and Nutrition Abbreviated Journal
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Thessaloniki; Greece; October 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IMEKOFOODS  
  Notes ADAS; 600.118 Approved no
  Call Number Admin @ si @ GDM2017 Serial 3081  
 

 
Author Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate
  Title Decremental generalized discriminative common vectors applied to images classification Type Journal Article
  Year 2017 Publication Knowledge-Based Systems Abbreviated Journal KBS  
  Volume 131 Issue Pages 46-57  
  Keywords Decremental learning; Generalized Discriminative Common Vectors; Feature extraction; Linear subspace methods; Classification  
  Abstract In this paper, a novel decremental subspace-based learning method called Decremental Generalized Discriminative Common Vectors method (DGDCV) is presented. The method makes use of the concept of decremental learning, which we introduce in the field of supervised feature extraction and classification. By efficiently removing unnecessary data and/or classes from a knowledge base, our methodology is able to update the model without recalculating the full projection or accessing the previously processed training data, while retaining the previously acquired knowledge. The proposed method has been validated on 6 standard face recognition datasets, showing a considerable computational gain without compromising the accuracy of the model.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.118; 600.121 Approved no
  Call Number Admin @ si @ DMH2017a Serial 3003  
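
The decremental idea, removing samples without reprocessing the remaining training data, can be illustrated on simple statistics. This is a toy sketch that downdates a mean and scatter matrix, not the DGDCV equations.

```python
import numpy as np

def downdate_mean_scatter(mean, scatter, n, removed):
    """Remove `removed` (k x d) from a dataset summarized only by its mean (d,),
    scatter matrix (d x d, sum of centered outer products) and count n."""
    new_n = n - removed.shape[0]
    new_mean = (n * mean - removed.sum(axis=0)) / new_n
    # uses the identity  S = sum x x^T - n * mean mean^T
    sum_outer = scatter + n * np.outer(mean, mean) - removed.T @ removed
    new_scatter = sum_outer - new_n * np.outer(new_mean, new_mean)
    return new_mean, new_scatter, new_n

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
mean, n = X.mean(axis=0), len(X)
scatter = (X - mean).T @ (X - mean)
m2, S2, n2 = downdate_mean_scatter(mean, scatter, n, X[:10])
assert np.allclose(m2, X[10:].mean(axis=0))   # matches recomputation from scratch
```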
 

 
Author Patricia Suarez; Angel Sappa; Boris X. Vintimilla
  Title Learning to Colorize Infrared Images Type Conference Article
  Year 2017 Publication 15th International Conference on Practical Applications of Agents and Multi-Agent System Abbreviated Journal  
  Volume Issue Pages  
  Keywords CNN in multispectral imaging; Image colorization  
  Abstract This paper focuses on near infrared (NIR) image colorization by using a Generative Adversarial Network (GAN) architecture model. The proposed architecture consists of two stages. Firstly, it learns to colorize the given input, resulting in an RGB image. Then, in the second stage, a discriminative model is used to estimate the probability that the generated image came from the training dataset, rather than being automatically generated. The proposed model starts the learning process from scratch, because our set of images is very different from the datasets used in existing pre-trained models, so transfer learning strategies cannot be used. Infrared image colorization is an important problem when human perception needs to be considered, e.g., in remote sensing applications. Experimental results with a large set of real images are provided, showing the validity of the proposed approach.
  Address Porto; Portugal; June 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference PAAMS  
  Notes ADAS; MSIAU; 600.086; 600.122; 600.118 Approved no
  Call Number Admin @ si @ Serial 2919  
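
A bare-bones GAN training step of the kind the abstract describes might look as follows. This is a schematic sketch; the paper's generator and discriminator architectures are not given here, so both networks and all hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())    # NIR -> RGB
D = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Flatten(), nn.Linear(32 * 32 * 32, 1))        # real/fake logit, 64x64 input

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

nir = torch.rand(4, 1, 64, 64)   # stand-in NIR batch
rgb = torch.rand(4, 3, 64, 64)   # corresponding RGB ground truth

fake = G(nir)
loss_d = bce(D(rgb), torch.ones(4, 1)) + bce(D(fake.detach()), torch.zeros(4, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

loss_g = bce(D(fake), torch.ones(4, 1))   # generator tries to fool D
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```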
 

 
Author Patricia Suarez; Angel Sappa; Boris X. Vintimilla
  Title Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture Type Conference Article
  Year 2017 Publication 19th international conference on image analysis and processing Abbreviated Journal  
  Volume Issue Pages  
  Keywords CNN in Multispectral Imaging; Image Colorization  
  Abstract This paper focuses on near infrared (NIR) image colorization by using a Conditional Deep Convolutional Generative Adversarial Network (CDCGAN) architecture model. The proposed architecture is based on the usage of a conditional probabilistic generative model. Firstly, it learns to colorize the given input image by using a triplet model architecture that tackles every channel independently. In the proposed model, the final layer of the red channel considers the infrared image to enhance the details, resulting in a sharp RGB image. Then, in the second stage, a discriminative model is used to estimate the probability that the generated image came from the training dataset, rather than being automatically generated. Experimental results with a large set of real images are provided, showing the validity of the proposed approach. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.
  Address Catania; Italy; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICIAP  
  Notes ADAS; MSIAU; 600.086; 600.122; 600.118 Approved no
  Call Number Admin @ si @ SSV2017c Serial 3016  
 

 
Author Cristhian Aguilera; Xavier Soria; Angel Sappa; Ricardo Toledo
  Title RGBN Multispectral Images: a Novel Color Restoration Approach Type Conference Article
  Year 2017 Publication 15th International Conference on Practical Applications of Agents and Multi-Agent System Abbreviated Journal  
  Volume Issue Pages  
  Keywords Multispectral Imaging; Free Sensor Model; Neural Network  
  Abstract This paper describes a color restoration technique used to remove NIR information from single-sensor cameras where color and near-infrared images are simultaneously acquired, referred to in the literature as RGBN images. The proposed approach is based on a neural network architecture that learns the NIR information contained in the RGBN images. The proposed approach is evaluated on real images obtained by using a pair of RGBN cameras. Additionally, qualitative comparisons with a naive color correction technique based on mean square error minimization are provided.
 
  Address Porto; Portugal; June 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference PAAMS  
  Notes ADAS; MSIAU; 600.118; 600.122 Approved no
  Call Number Admin @ si @ ASS2017 Serial 2918  
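
The color restoration setup could be sketched as a per-pixel mapping from RGBN values to NIR-free RGB, with a linear least-squares fit standing in for the naive mean-square-error baseline the abstract mentions. The network shape and data are placeholders, not the paper's architecture.

```python
import numpy as np
import torch
import torch.nn as nn

rgbn = torch.rand(10000, 4)       # stand-in RGBN pixel values
rgb_clean = torch.rand(10000, 3)  # reference RGB free of NIR contamination

net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):                              # full-batch training, toy scale
    loss = nn.functional.mse_loss(net(rgbn), rgb_clean)
    opt.zero_grad(); loss.backward(); opt.step()

# naive baseline: one linear color-correction matrix fitted by least squares
M, *_ = np.linalg.lstsq(rgbn.numpy(), rgb_clean.numpy(), rcond=None)
```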
 

 
Author Joan Serrat; Felipe Lumbreras; Francisco Blanco; Manuel Valiente; Montserrat Lopez-Mesas
  Title myStone: A system for automatic kidney stone classification Type Journal Article
  Year 2017 Publication Expert Systems with Applications Abbreviated Journal ESA  
  Volume 89 Issue Pages 41-51  
  Keywords Kidney stone; Optical device; Computer vision; Image classification  
  Abstract Kidney stone formation is a common disease and the incidence rate is constantly increasing worldwide. It has been shown that the classification of kidney stones can lead to an important reduction of the recurrence rate. The classification of kidney stones by human experts on the basis of certain visual color and texture features is one of the most employed techniques. However, the knowledge of how to analyze kidney stones is not widespread, and experts learn only after being trained on a large number of samples of the different classes. In this paper we describe a new device specifically designed for capturing images of expelled kidney stones, and a method to learn and apply the experts' knowledge with regard to their classification. We show that with off-the-shelf components, a carefully selected set of features, and a state-of-the-art classifier it is possible to automate this difficult task to a good degree. We report results on a collection of 454 kidney stones, achieving an overall accuracy of 63% for a set of eight classes covering almost all of the kidney stone taxonomy. Moreover, for more than 80% of samples the real class is the first or the second most probable class according to the system, and the patient recommendations for the two top classes are then similar. This is the first attempt towards the automatic visual classification of kidney stones, and based on the current results we foresee better accuracies as the dataset size increases.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; MSIAU; 603.046; 600.122; 600.118 Approved no
  Call Number Admin @ si @ SLB2017 Serial 3026  
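
In the spirit of the pipeline described, a minimal sketch with simple color-histogram features and an off-the-shelf classifier might look like this. The paper's exact features and classifier are not detailed in the abstract, and the data below is synthetic stand-in data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def color_histogram(img, bins=8):
    """img: H x W x 3 uint8 -> concatenated per-channel histograms."""
    return np.concatenate([np.histogram(img[..., c], bins=bins,
                                        range=(0, 255), density=True)[0]
                           for c in range(3)])

rng = np.random.default_rng(0)                    # synthetic stand-in dataset
images = rng.integers(0, 256, size=(100, 64, 64, 3), dtype=np.uint8)
labels = rng.integers(0, 8, size=100)             # eight hypothetical classes

X = np.stack([color_histogram(im) for im in images])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())
```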
 

 
Author David Vazquez; Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Antonio Lopez; Adriana Romero; Michal Drozdzal; Aaron Courville
  Title A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images Type Conference Article
  Year 2017 Publication 31st International Congress and Exhibition on Computer Assisted Radiology and Surgery Abbreviated Journal  
  Volume Issue Pages  
  Keywords Deep Learning; Medical Imaging  
  Abstract Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss-rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy images, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. We provide new baselines on this dataset by training standard fully convolutional networks (FCN) for semantic segmentation, significantly outperforming, without any further post-processing, prior results in endoluminal scene segmentation.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CARS  
  Notes ADAS; MV; 600.075; 600.085; 600.076; 601.281; 600.118 Approved no
  Call Number ADAS @ adas @ VBS2017a Serial 2880  
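
A minimal version of the kind of FCN baseline described could be set up as follows. torchvision's FCN-ResNet50 is used purely for illustration; the paper's backbone and training setup may differ, and the frames and masks below are random stand-ins.

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(weights=None, num_classes=4)   # 4 endoluminal classes
images = torch.rand(2, 3, 256, 256)                 # stand-in colonoscopy frames
masks = torch.randint(0, 4, (2, 256, 256))          # stand-in pixel labels

logits = model(images)["out"]                       # B x 4 x H x W
loss = torch.nn.functional.cross_entropy(logits, masks)
loss.backward()
```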
 

 
Author David Vazquez; Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Antonio Lopez; Adriana Romero; Michal Drozdzal; Aaron Courville
  Title A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images Type Journal Article
  Year 2017 Publication Journal of Healthcare Engineering Abbreviated Journal JHCE  
  Volume Issue Pages
  Keywords Colonoscopy images; Deep Learning; Semantic Segmentation  
  Abstract Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss-rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset, and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCN). We perform a comparative study to show that FCNs significantly outperform, without any further post-processing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2040-2295 ISBN Medium
  Area Expedition Conference  
  Notes ADAS; MV; 600.075; 600.085; 600.076; 601.281; 600.118 Approved no
  Call Number VBS2017b Serial 2940  
 

 
Author Ivet Rafegas
  Title Color in Visual Recognition: from flat to deep representations and some biological parallelisms Type Book Whole
  Year 2017 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Visual recognition is one of the main problems in computer vision that attempts to solve image understanding by deciding what objects are in images. This problem can be computationally solved by using relevant sets of visual features, such as edges, corners, color or more complex object parts. This thesis contributes to how color features have to be represented for recognition tasks.

Image features can be extracted following two different approaches. A first approach is defining handcrafted descriptors of images, which is then followed by a learning scheme to classify the content (named flat schemes in Kruger et al. (2013)). In this approach, perceptual considerations are habitually used to define efficient color features. Here we propose a new flat color descriptor based on the extension of color channels to boost the representation of spatio-chromatic contrast, which surpasses state-of-the-art approaches. However, flat schemes present a lack of generality, far from the capabilities of biological systems. A second approach proposes evolving these flat schemes into a hierarchical process, like in the visual cortex. This includes an automatic process to learn optimal features. These deep schemes, and more specifically Convolutional Neural Networks (CNNs), have shown an impressive performance to solve various vision problems. However, there is a lack of understanding about the internal representation obtained as a result of automatic learning. In this thesis we propose a new methodology to explore the internal representation of trained CNNs by defining the Neuron Feature as a visualization of the intrinsic features encoded in each individual neuron. Additionally, and inspired by physiological techniques, we propose to compute different neuron selectivity indexes (e.g., color, class, orientation or symmetry, amongst others) to label and classify the full CNN neuron population, to understand learned representations.

Finally, using the proposed methodology, we show an in-depth study on how color is represented in a specific CNN, trained for object recognition, that competes with primate representational abilities (Cadieu et al. (2014)). We found several parallelisms with biological visual systems: (a) a significant number of color-selective neurons throughout all the layers; (b) an opponent and low-frequency representation of color oriented edges and a higher sampling of frequency selectivity in brightness than in color in the 1st layer, as in V1; (c) a higher sampling of color hue in the second layer, aligned with observed hue maps in V2; (d) a strong color and shape entanglement in all layers, from basic features in shallower layers (V1 and V2) to object and background shapes in deeper layers (V4 and IT); and (e) a strong correlation between neuron color selectivities and color dataset bias.
 
  Address November 2017  
  Corporate Author Thesis Ph.D. thesis  
  Publisher Ediciones Graficas Rey Place of Publication Editor Maria Vanrell  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-84-945373-7-0 Medium  
  Area Expedition Conference  
  Notes CIC Approved no
  Call Number Admin @ si @ Raf2017 Serial 3100  
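
A Neuron-Feature-style visualization could be sketched as a weighted average of a neuron's top-activating input patches. This is an assumed simplification of the thesis' definition; the activation source and patch extraction are left abstract.

```python
import torch

def neuron_feature(acts, patches, k=50):
    """acts: (N,) activations of one neuron over N patches;
    patches: N x C x h x w receptive-field crops of the inputs."""
    w, idx = acts.topk(k)
    w = w / w.sum()                                    # activation-weighted average
    return (w.view(-1, 1, 1, 1) * patches[idx]).sum(dim=0)  # C x h x w image

nf = neuron_feature(torch.rand(1000), torch.rand(1000, 3, 11, 11))
```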
 

 
Author Ivet Rafegas; Javier Vazquez; Robert Benavente; Maria Vanrell; Susana Alvarez
  Title Enhancing spatio-chromatic representation with more-than-three color coding for image description Type Journal Article
  Year 2017 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A  
  Volume 34 Issue 5 Pages 827-837  
  Keywords  
  Abstract Extraction of spatio-chromatic features from color images is usually performed independently on each color channel. Usual 3D color spaces, such as RGB, present a high inter-channel correlation for natural images. This correlation can be reduced using color-opponent representations, but the spatial structure of regions with small color differences is not fully captured in two generic Red-Green and Blue-Yellow channels. To overcome these problems, we propose a new color coding that is adapted to the specific content of each image. Our proposal is based on two steps: (a) setting the number of channels to the number of distinctive colors we find in each image (avoiding the problem of channel correlation), and (b) building a channel representation that maximizes contrast differences within each color channel (avoiding the problem of low local contrast). We call this approach more-than-three color coding (MTT) to enhance the fact that the number of channels is adapted to the image content. The higher color complexity an image has, the more channels can be used to represent it. Here we select distinctive colors as the most predominant in the image, which we call color pivots, and we build the new color coding using these color pivots as a basis. To evaluate the proposed approach we measure its efficiency in an image categorization task. We show how a generic descriptor improves its performance at the description level when applied on the MTT coding.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes CIC; 600.087 Approved no
  Call Number Admin @ si @ RVB2017 Serial 2892  
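
A rough sketch of the MTT idea follows. The channel construction below, a Gaussian similarity to k-means color pivots, is an assumption for illustration, not the paper's exact formulation of pivot selection or contrast maximization.

```python
import numpy as np
from sklearn.cluster import KMeans

def mtt_channels(img, n_pivots=6, sigma=30.0):
    """img: H x W x 3 float array in [0, 255] -> H x W x n_pivots coding."""
    pixels = img.reshape(-1, 3)
    pivots = KMeans(n_clusters=n_pivots, n_init=4).fit(pixels).cluster_centers_
    dist = np.linalg.norm(pixels[:, None, :] - pivots[None, :, :], axis=2)
    channels = np.exp(-(dist / sigma) ** 2)   # high where a pixel matches a pivot
    return channels.reshape(img.shape[0], img.shape[1], n_pivots)

coding = mtt_channels(np.random.rand(64, 64, 3) * 255)
print(coding.shape)   # (64, 64, 6)
```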