Records
Author Sandra Jimenez; Xavier Otazu; Valero Laparra; Jesus Malo
Title Chromatic induction and contrast masking: similar models, different goals? Type Conference Article
Year 2013 Publication Human Vision and Electronic Imaging XVIII Abbreviated Journal
Volume 8651 Issue Pages
Keywords
Abstract Normalization of signals coming from linear sensors is a ubiquitous mechanism of neural adaptation. Local interaction between sensors tuned to a particular feature at a certain spatial position and neighboring sensors explains a wide range of psychophysical facts, including (1) masking of spatial patterns, (2) non-linearities of motion sensors, (3) adaptation of color perception, (4) brightness and chromatic induction, and (5) image quality assessment. Although the above models have formal and qualitative similarities, this does not necessarily mean that the mechanisms involved pursue the same statistical goal. For instance, in the case of chromatic mechanisms (disregarding spatial information), different parameters in the normalization give rise to optimal discrimination or adaptation, and different non-linearities may give rise to error minimization or component independence. In the case of spatial sensors (disregarding color information), a number of studies have pointed out the benefits of masking in statistical independence terms. However, such statistical analysis has not been performed for spatio-chromatic induction models, where chromatic perception depends on spatial configuration. In this work we investigate whether successful spatio-chromatic induction models increase component independence similarly to what has previously been reported for masking models. Mutual information analysis suggests that seeking an efficient chromatic representation may explain the prevalence of induction effects in spatially simple images.
Address San Francisco CA; USA; February 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference HVEI
Notes (up) CIC Approved no
Call Number Admin @ si @ JOL2013 Serial 2240
 

 
Author Ivet Rafegas
Title Exploring Low-Level Vision Models. Case Study: Saliency Prediction Type Report
Year 2013 Publication CVC Technical Report Abbreviated Journal
Volume 175 Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis Master's thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes (up) CIC Approved no
Call Number Admin @ si @ Raf2013 Serial 2409
 

 
Author Joost Van de Weijer; Fahad Shahbaz Khan
Title Fusing Color and Shape for Bag-of-Words Based Object Recognition Type Conference Article
Year 2013 Publication 4th Computational Color Imaging Workshop Abbreviated Journal
Volume 7786 Issue Pages 25-34
Keywords Object Recognition; color features; bag-of-words; image classification
Abstract In this article we provide an analysis of existing methods for the incorporation of color in bag-of-words based image representations. We propose a list of desired properties on the basis of which fusing methods can be compared. We discuss existing methods and indicate shortcomings of the two well-known fusing methods, namely early and late fusion. Several recent works have addressed these shortcomings by exploiting top-down information in the bag-of-words pipeline: color attention, which is motivated from human vision, and Portmanteau vocabularies, which are based on information-theoretic compression of product vocabularies. We point out several remaining challenges in cue fusion and provide directions for future research.
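The early/late fusion distinction this abstract discusses can be sketched in a few lines. Everything below is illustrative: the random descriptors, vocabulary sizes, and nearest-neighbour assignment are stand-ins, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy local descriptors for one image: a shape cue and a color cue per patch.
shape_desc = rng.random((50, 8))   # e.g. SIFT-like, 50 patches x 8 dims
color_desc = rng.random((50, 3))   # e.g. a small hue descriptor, 50 patches x 3 dims

def assign(descs, vocab):
    """Hard-assign each descriptor to its nearest visual word."""
    d = ((descs[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

def bow_hist(words, k):
    """Normalized bag-of-words histogram over k visual words."""
    h = np.bincount(words, minlength=k).astype(float)
    return h / h.sum()

# Early fusion: concatenate the cues per patch, then quantize jointly.
joint = np.hstack([shape_desc, color_desc])
vocab_joint = rng.random((20, 11))
early = bow_hist(assign(joint, vocab_joint), 20)

# Late fusion: quantize each cue separately, then concatenate the histograms.
vocab_s = rng.random((20, 8))
vocab_c = rng.random((10, 3))
late = np.hstack([bow_hist(assign(shape_desc, vocab_s), 20),
                  bow_hist(assign(color_desc, vocab_c), 10)])

print(early.shape, late.shape)  # (20,) (30,)
```

The structural difference the paper analyzes is visible here: early fusion binds shape and color into one joint vocabulary, while late fusion keeps two independent vocabularies and only combines the resulting histograms.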
Address Chiba; Japan; March 2013
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-36699-4 Medium
Area Expedition Conference CCIW
Notes (up) CIC; 600.048 Approved no
Call Number Admin @ si @ WeK2013 Serial 2283
 

 
Author Rahat Khan; Joost Van de Weijer; Fahad Shahbaz Khan; Damien Muselet; Christophe Ducottet; Cecile Barat
Title Discriminative Color Descriptors Type Conference Article
Year 2013 Publication IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 2866 - 2873
Keywords
Abstract Color description is a challenging task because of large variations in RGB values which occur due to scene accidental events, such as shadows, shading, specularities, illuminant color changes, and changes in viewing geometry. Traditionally, this challenge has been addressed by capturing the variations in physics-based models, and deriving invariants for the undesired variations. The drawback of this approach is that sets of distinguishable colors in the original color space are mapped to the same value in the photometric invariant space. This results in a drop of discriminative power of the color description. In this paper we take an information theoretic approach to color description. We cluster color values together based on their discriminative power in a classification problem. The clustering has the explicit objective to minimize the drop of mutual information of the final representation. We show that such a color description automatically learns a certain degree of photometric invariance. We also show that a universal color representation, which is based on other data sets than the one at hand, can obtain competing performance. Experiments show that the proposed descriptor outperforms existing photometric invariants. Furthermore, we show that combined with shape description these color descriptors obtain excellent results on four challenging datasets, namely, PASCAL VOC 2007, Flowers-102, Stanford dogs-120 and Birds-200.
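The core idea of this abstract, clustering color values so that the drop in mutual information with the class label is minimized, can be sketched with a greedy agglomerative merge. This is a toy scheme in the spirit of the paper, not its exact algorithm, and the co-occurrence counts are invented.

```python
import numpy as np

def mutual_info(joint):
    """I(W;C) from a joint count table (rows: color bins, cols: classes)."""
    p = joint / joint.sum()
    pw = p.sum(1, keepdims=True)
    pc = p.sum(0, keepdims=True)
    nz = p > 0
    return (p[nz] * np.log(p[nz] / (pw @ pc)[nz])).sum()

def merge_bins(joint, target_k):
    """Greedily merge the pair of color bins whose merge costs the least
    mutual information with the class label."""
    joint = joint.astype(float).copy()
    while joint.shape[0] > target_k:
        best, pair = None, None
        for i in range(joint.shape[0]):
            for j in range(i + 1, joint.shape[0]):
                merged = np.delete(joint, j, axis=0)
                merged[i] = joint[i] + joint[j]
                mi = mutual_info(merged)
                if best is None or mi > best:
                    best, pair = mi, (i, j)
        i, j = pair
        joint[i] += joint[j]
        joint = np.delete(joint, j, axis=0)
    return joint

# Toy co-occurrence counts of 6 color bins vs 2 object classes.
counts = np.array([[9, 1], [8, 2], [1, 9], [2, 8], [5, 5], [4, 6]])
reduced = merge_bins(counts, 3)
print(reduced.shape)  # (3, 2)
```

By the data-processing inequality, merging bins can only lose mutual information with the class; the greedy step picks the merge that loses the least, which is what makes the resulting compact descriptor discriminative.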
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN Medium
Area Expedition Conference CVPR
Notes (up) CIC; 600.048 Approved no
Call Number Admin @ si @ KWK2013a Serial 2262
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Sadiq Ali; Michael Felsberg
Title Evaluating the impact of color on texture recognition Type Conference Article
Year 2013 Publication 15th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal
Volume 8047 Issue Pages 154-162
Keywords Color; Texture; image representation
Abstract State-of-the-art texture descriptors typically operate on grey-scale images while ignoring color information. A common way to obtain a joint color-texture representation is to combine the two visual cues at the pixel level. However, such an approach provides sub-optimal results for the texture categorisation task.
In this paper we investigate how to optimally exploit color information for texture recognition. We evaluate a variety of color descriptors, popular in image classification, for texture categorisation. In addition, we analyze different fusion approaches to combine color and texture cues. Experiments are conducted on the challenging scenes and 10-class texture datasets. Our experiments clearly suggest that in all cases color names provide the best performance, and that late fusion is the best strategy to combine color and texture. Selecting the best color descriptor with the optimal fusion strategy provides a gain of 5% to 8% over texture alone on both the scenes and texture datasets.
Address York; UK; August 2013
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-40260-9 Medium
Area Expedition Conference CAIP
Notes (up) CIC; 600.048 Approved no
Call Number Admin @ si @ KWA2013 Serial 2263
 

 
Author Shida Beigpour; Marc Serra; Joost Van de Weijer; Robert Benavente; Maria Vanrell; Olivier Penacchio; Dimitris Samaras
Title Intrinsic Image Evaluation On Synthetic Complex Scenes Type Conference Article
Year 2013 Publication 20th IEEE International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 285 - 289
Keywords
Abstract Scene decomposition into its illuminant, shading, and reflectance intrinsic images is an essential step for scene understanding. Collecting intrinsic image ground-truth data is a laborious task. The assumptions on which the ground-truth procedures are based limit their application to simple scenes with a single object taken in the absence of indirect lighting and interreflections. We investigate synthetic data for intrinsic image research, since the extraction of ground truth is straightforward and it allows for scenes in more realistic situations (e.g., multiple illuminants and interreflections). With this dataset we aim to motivate researchers to further explore intrinsic image decomposition in complex scenes.
Address Melbourne; Australia; September 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes (up) CIC; 600.048; 600.052; 600.051 Approved no
Call Number Admin @ si @ BSW2013 Serial 2264
 

 
Author Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga
Title Low-level SpatioChromatic Grouping for Saliency Estimation Type Journal Article
Year 2013 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 35 Issue 11 Pages 2810-2816
Keywords
Abstract We propose a saliency model termed SIM (saliency by induction mechanisms), which is based on a low-level spatiochromatic model that has successfully predicted chromatic induction phenomena. In so doing, we hypothesize that the low-level visual mechanisms that enhance or suppress image detail are also responsible for making some image regions more salient. Moreover, SIM adds geometrical grouplets to enhance complex low-level features such as corners, and suppress relatively simpler features such as edges. Since our model has been fitted on psychophysical chromatic induction data, it is largely nonparametric. SIM outperforms state-of-the-art methods in predicting eye fixations on two datasets and using two metrics.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes (up) CIC; 600.051; 600.052; 605.203 Approved no
Call Number Admin @ si @ MVO2013 Serial 2289
 

 
Author Jordi Roca; C. Alejandro Parraga; Maria Vanrell
Title Chromatic settings and the structural color constancy index Type Journal Article
Year 2013 Publication Journal of Vision Abbreviated Journal JV
Volume 13 Issue 4-3 Pages 1-26
Keywords
Abstract Color constancy is usually measured by achromatic setting, asymmetric matching, or color naming paradigms, whose results are interpreted in terms of indexes and models that arguably do not capture the full complexity of the phenomenon. Here we propose a new paradigm, chromatic setting, which allows a more comprehensive characterization of color constancy through the measurement of multiple points in color space under immersive adaptation. We demonstrated its feasibility by assessing the consistency of subjects' responses over time. The paradigm was applied to two-dimensional (2-D) Mondrian stimuli under three different illuminants, and the results were used to fit a set of linear color constancy models. The use of multiple colors improved the precision of more complex linear models compared to the popular diagonal model computed from gray. Our results show that a diagonal plus translation matrix that models mechanisms other than cone gain might be best suited to explain the phenomenon. Additionally, we calculated a number of color constancy indices for several points in color space, and our results suggest that interrelations among colors are not as uniform as previously believed. To account for this variability, we developed a new structural color constancy index that takes into account the magnitude and orientation of the chromatic shift in addition to the interrelations among colors and memory effects.
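The comparison this abstract draws between the diagonal (von Kries-style) model and a diagonal-plus-translation model can be illustrated with synthetic data. The cone responses, gains, and offsets below are invented for the sketch; the paper fits such models to measured chromatic settings, not to simulated data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cone responses of 10 surfaces under a reference illuminant,
# and under a test illuminant generated by a gain-plus-offset change.
X = rng.random((10, 3))
D_true = np.diag([1.2, 0.9, 1.1])
t_true = np.array([0.05, -0.02, 0.03])
Y = X @ D_true + t_true

# Diagonal model: one gain per channel, least-squares fit.
gains = (X * Y).sum(0) / (X * X).sum(0)
err_diag = np.abs(X * gains - Y).mean()

# Diagonal + translation: gain and offset per channel.
ones = np.ones((10, 1))
err_dt = 0.0
for c in range(3):
    A = np.hstack([X[:, c:c + 1], ones])
    coef, *_ = np.linalg.lstsq(A, Y[:, c], rcond=None)
    err_dt = max(err_dt, np.abs(A @ coef - Y[:, c]).max())

print(err_dt < err_diag)  # True: the translation term captures the offset
```

When the illuminant change genuinely includes an additive shift, as simulated here, the pure diagonal model leaves a systematic residual that the translation term removes, which mirrors the paper's finding that a diagonal-plus-translation matrix fits the data better than cone gain alone.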
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes (up) CIC; 600.052; 600.051; 605.203 Approved no
Call Number Admin @ si @ RPV2013 Serial 2288
 

 
Author Abel Gonzalez-Garcia; Robert Benavente; Olivier Penacchio; Javier Vazquez; Maria Vanrell; C. Alejandro Parraga
Title Coloresia: An Interactive Colour Perception Device for the Visually Impaired Type Book Chapter
Year 2013 Publication Multimodal Interaction in Image and Video Applications Abbreviated Journal
Volume 48 Issue Pages 47-66
Keywords
Abstract A significant percentage of the human population suffers from impairments in their capacity to distinguish or even see colours. For them, everyday tasks like navigating through a train or metro network map become demanding. We present a novel technique for extracting colour information from everyday natural stimuli and presenting it to visually impaired users as pleasant, non-invasive sound. This technique was implemented inside a Personal Digital Assistant (PDA) portable device. In this implementation, colour information is extracted from the input image and categorised according to how human observers segment the colour space. This information is subsequently converted into sound and sent to the user via speakers or headphones. In the original implementation, it is possible for the user to send feedback to reconfigure the system; however, several features such as this were not implemented because the current technology is limited. We are confident that the full implementation will be possible in the near future as PDA technology improves.
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1868-4394 ISBN 978-3-642-35931-6 Medium
Area Expedition Conference
Notes (up) CIC; 600.052; 605.203 Approved no
Call Number Admin @ si @ GBP2013 Serial 2266
 

 
Author Joost Van de Weijer; Fahad Shahbaz Khan; Marc Masana
Title Interactive Visual and Semantic Image Retrieval Type Book Chapter
Year 2013 Publication Multimodal Interaction in Image and Video Applications Abbreviated Journal
Volume 48 Issue Pages 31-35
Keywords
Abstract One direct consequence of recent advances in digital visual data generation, and of the direct availability of this information through the World-Wide Web, is an urgent demand for efficient image retrieval systems. The objective of image retrieval is to allow users to efficiently browse through this abundance of images. Due to the non-expert nature of the majority of internet users, such systems should be user friendly and therefore avoid complex user interfaces. In this chapter we investigate how high-level information provided by recently developed object recognition techniques can improve interactive image retrieval. We apply a bag-of-words based image representation method to automatically classify images into a number of categories. These additional labels are then used to improve the image retrieval system. Next to these high-level semantic labels, we also apply a low-level image description to describe the composition and color scheme of the scene. Both descriptions are incorporated in a user-feedback image retrieval setting. The main objective is to show that automatic labeling of images with semantic labels can improve image retrieval results.
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor Angel Sappa; Jordi Vitria
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1868-4394 ISBN 978-3-642-35931-6 Medium
Area Expedition Conference
Notes (up) CIC; 605.203; 600.048 Approved no
Call Number Admin @ si @ WKC2013 Serial 2284
 

 
Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Andrew Bagdanov; Antonio Lopez; Michael Felsberg
Title Coloring Action Recognition in Still Images Type Journal Article
Year 2013 Publication International Journal of Computer Vision Abbreviated Journal IJCV
Volume 105 Issue 3 Pages 205-221
Keywords
Abstract In this article we investigate the problem of human action recognition in static images. By action recognition we mean a class of problems which includes both action classification and action detection (i.e. simultaneous localization and classification). Bag-of-words image representations yield promising results for action classification, and deformable part models perform very well for object detection. The representations for action recognition typically use only shape cues and ignore color information. Inspired by the recent success of color in image classification and object detection, we investigate the potential of color for action classification and detection in static images. We perform a comprehensive evaluation of color descriptors and fusion approaches for action recognition. Experiments were conducted on the three datasets most used for benchmarking action recognition in still images: Willow, PASCAL VOC 2010 and Stanford-40. Our experiments demonstrate that incorporating color information considerably improves recognition performance, and that a descriptor based on color names outperforms pure color descriptors. Our experiments demonstrate that late fusion of color and shape information outperforms other approaches on action recognition. Finally, we show that the different color-shape fusion approaches result in complementary information and combining them yields state-of-the-art performance for action classification.
Address
Corporate Author Thesis
Publisher Springer US Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0920-5691 ISBN Medium
Area Expedition Conference
Notes (up) CIC; ADAS; 600.057; 600.048 Approved no
Call Number Admin @ si @ KRW2013 Serial 2285
 

 
Author Rahat Khan; Joost Van de Weijer; Dimosthenis Karatzas; Damien Muselet
Title Towards multispectral data acquisition with hand-held devices Type Conference Article
Year 2013 Publication 20th IEEE International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 2053 - 2057
Keywords Multispectral; mobile devices; color measurements
Abstract We propose a method to acquire multispectral data with hand-held devices with front-mounted RGB cameras. We propose to use the display of the device as an illuminant while the camera captures images illuminated by the red, green and blue primaries of the display. Three illuminants and three response functions of the camera lead to nine response values, which are used for reflectance estimation. Results are promising and show that the accuracy of the spectral reconstruction improves by 30-40% over spectral reconstruction based on a single illuminant. Furthermore, we propose to compute a sensor-illuminant aware linear basis by discarding the part of the reflectances that falls in the sensor-illuminant null-space. We show experimentally that optimizing reflectance estimation on these new basis functions decreases the RMSE significantly compared to basis functions that are independent of the sensor-illuminant. We conclude that multispectral data acquisition is potentially possible with consumer hand-held devices such as tablets, mobiles, and laptops, opening up applications which are currently considered to be unrealistic.
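The nine-response forward model described above (three display primaries times three camera channels) and basis-constrained reflectance estimation can be sketched as follows. The spectra and the reflectance basis here are random placeholders; real display primaries, measured camera sensitivities, and a PCA basis of measured reflectances would take their place.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 31  # wavelength samples, e.g. 400-700 nm in 10 nm steps

# Hypothetical display primaries (illuminants) and camera sensitivities.
E = rng.random((3, n))      # red, green, blue display spectra
S = rng.random((3, n))      # camera RGB sensitivities

# Forward model: 9 responses, one per (illuminant, channel) pair.
M = np.stack([E[k] * S[i] for k in range(3) for i in range(3)])  # (9, n)

# Low-dimensional linear reflectance basis (random here; in practice
# e.g. principal components of measured reflectances).
B = rng.random((n, 6))

def estimate_reflectance(responses, M, B):
    """Least-squares reflectance estimate constrained to the basis B."""
    a, *_ = np.linalg.lstsq(M @ B, responses, rcond=None)
    return B @ a

# Simulate a reflectance that lies in the basis and recover it.
rho_true = B @ rng.random(6)
c = M @ rho_true                       # the nine camera responses
rho_hat = estimate_reflectance(c, M, B)
print(np.allclose(rho_hat, rho_true))  # True: rho_true lies in span(B)
```

With 9 responses and a 6-dimensional basis the system is overdetermined, which is why the extra display illuminants improve the reconstruction over a single-illuminant (3-response) setup.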
Address Melbourne; Australia; September 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes (up) CIC; DAG; 600.048 Approved no
Call Number Admin @ si @ KWK2013b Serial 2265
 

 
Author Anjan Dutta; Josep Llados; Horst Bunke; Umapada Pal
Title A Product graph based method for dual subgraph matching applied to symbol spotting Type Conference Article
Year 2013 Publication 10th IAPR International Workshop on Graphics Recognition Abbreviated Journal
Volume Issue Pages
Keywords
Abstract The product graph has been shown to be an efficient tool for matching subgraphs. This paper reports an extension of the product graph methodology for subgraph matching applied to symbol spotting in graphical documents. It focuses on two major limitations of the previous version of the product graph: (1) spurious nodes and edges in the graph representation, and (2) inefficient node and edge attributes. To deal with the noisy information of vectorized graphical documents, we consider a dual graph representation of the original graph representing the graphical information, and the product graph is computed between the dual graphs of the query graph and the input graph. The dual graph, with its redundant edges, allows an efficient and tolerant encoding of the structural information of the graphical documents. The adjacency matrix of the product graph locates similar path information of the two graphs, and exponentiating the adjacency matrix finds similar paths of greater lengths. Nodes joining similar paths between the two graphs are found by combining different exponentials of the adjacency matrix. An experimental investigation reveals that the recall obtained by this approach is quite encouraging.
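The path-matching property of the product graph that the abstract relies on can be demonstrated on two tiny unlabeled graphs. This is a minimal sketch of the general mechanism (walks in the tensor product correspond to pairs of walks in the factors), not the paper's attributed dual-graph construction.

```python
import numpy as np

# Adjacency matrices of two small undirected graphs: a triangle and a path.
A1 = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
A2 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])

# Tensor-product graph: node (u, v) connects to (u', v') iff u~u' and v~v'.
Ax = np.kron(A1, A2)

# Walks of length k in the product graph correspond to pairs of length-k
# walks, one in each factor graph: Ax^k == kron(A1^k, A2^k).
k = 2
walks_product = np.linalg.matrix_power(Ax, k)
walks_pairwise = np.kron(np.linalg.matrix_power(A1, k),
                         np.linalg.matrix_power(A2, k))
print(np.array_equal(walks_product, walks_pairwise))  # True

# Combining several exponentials with decaying weights scores node pairs
# that are joined by common paths of multiple lengths.
lam = 0.1
score = sum(lam ** p * np.linalg.matrix_power(Ax, p) for p in range(1, 4))
```

High entries in `score` mark product-graph nodes, i.e. pairs of original nodes, that lie on many similar paths, which is the signal the spotting method thresholds.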
Address Bethlehem; PA; USA; August 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference GREC
Notes (up) DAG Approved no
Call Number Admin @ si @ DLB2013b Serial 2359
 

 
Author Kaida Xiao; Chenyang Fu; D.Mylonas; Dimosthenis Karatzas; S. Wuerger
Title Unique Hue Data for Colour Appearance Models. Part ii: Chromatic Adaptation Transform Type Journal Article
Year 2013 Publication Color Research & Application Abbreviated Journal CRA
Volume 38 Issue 1 Pages 22-29
Keywords
Abstract Unique hue settings of 185 observers under three room-lighting conditions were used to evaluate the accuracy of full and mixed chromatic adaptation transform models of CIECAM02 in terms of unique hue reproduction. Perceptual hue shifts in CIECAM02 were evaluated for both models, with no clear difference using the current Commission Internationale de l'Éclairage (CIE) recommendation for the mixed chromatic adaptation ratio. Using our large dataset of unique hue data as a benchmark, an optimised parameter is proposed for chromatic adaptation under mixed illumination conditions that produces more accurate results in unique hue reproduction.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes (up) DAG Approved no
Call Number Admin @ si @ XFM2013 Serial 1822
 

 
Author L. Rothacker; Marçal Rusiñol; G.A. Fink
Title Bag-of-Features HMMs for segmentation-free word spotting in handwritten documents Type Conference Article
Year 2013 Publication 12th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages 1305 - 1309
Keywords
Abstract Recent HMM-based approaches to handwritten word spotting require large amounts of learning samples and mostly rely on a prior segmentation of the document. We propose to use Bag-of-Features HMMs in a patch-based segmentation-free framework that can be estimated from a single sample. Bag-of-Features HMMs use statistics of local image feature representatives. They can therefore be considered a variant of discrete HMMs that models the observation of a number of features at a point in time. The discrete nature enables us to estimate a query model from only a single example of the query provided by the user. This makes our method very flexible with respect to the availability of training data. Furthermore, we are able to outperform state-of-the-art results on the George Washington dataset.
Address Washington; USA; August 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1520-5363 ISBN Medium
Area Expedition Conference ICDAR
Notes (up) DAG Approved no
Call Number Admin @ si @ RRF2013 Serial 2344